Related Hyperparameters
The study of related hyperparameters in machine learning addresses the challenge of jointly configuring multiple interdependent settings within an algorithm, a choice that significantly affects model performance and efficiency. Current research focuses on automating hyperparameter optimization (HPO) across diverse models, including deep neural networks, reinforcement learning agents, and large language models, using techniques such as Bayesian optimization, evolutionary algorithms, and even large language models themselves as automated tuners. Effective HPO is crucial for improving model accuracy, generalization, and resource efficiency, ultimately accelerating progress in a range of machine learning applications and fostering more robust, reliable model development.
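To make the idea of automated HPO concrete, the following is a minimal sketch of random search, one of the simplest HPO baselines that the more sophisticated methods above (Bayesian optimization, evolutionary algorithms) improve upon. The search space, the `toy_objective` function, and all names here are illustrative assumptions, not drawn from any of the listed papers; in practice the objective would be a real validation metric.

```python
import math
import random

# Hypothetical search space over two interdependent hyperparameters.
# The learning rate is sampled log-uniformly, a common convention.
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),  # (low, high), log-uniform
    "num_layers": [1, 2, 3, 4],     # discrete choice
}

def sample_config(rng: random.Random) -> dict:
    """Draw one random hyperparameter configuration."""
    lo, hi = SEARCH_SPACE["learning_rate"]
    lr = math.exp(rng.uniform(math.log(lo), math.log(hi)))
    layers = rng.choice(SEARCH_SPACE["num_layers"])
    return {"learning_rate": lr, "num_layers": layers}

def toy_objective(cfg: dict) -> float:
    """Stand-in for a validation loss; real HPO would train and
    evaluate a model here. Minimized near lr=1e-2, num_layers=2."""
    return (math.log10(cfg["learning_rate"]) + 2) ** 2 + (cfg["num_layers"] - 2) ** 2

def random_search(n_trials: int = 200, seed: int = 0):
    """Evaluate n_trials random configurations; return the best one."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        loss = toy_objective(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

Random search treats each trial independently; Bayesian optimization instead fits a surrogate model over past (configuration, loss) pairs to choose the next trial, which typically finds good settings in far fewer evaluations when training runs are expensive.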
Papers
Reward driven workflows for unsupervised explainable analysis of phases and ferroic variants from atomically resolved imaging data
Kamyar Barakati, Yu Liu, Chris Nelson, Maxim A. Ziatdinov, Xiaohang Zhang, Ichiro Takeuchi, Sergei V. Kalinin
Tailoring the Hyperparameters of a Wide-Kernel Convolutional Neural Network to Fit Different Bearing Fault Vibration Datasets
Dan Hudson, Jurgen van den Hoogen, Martin Atzmueller