Hyperparameter Optimization
Hyperparameter optimization (HPO) automates the search for the best settings of machine learning models, improving their performance and efficiency. Current research emphasizes developing more efficient and statistically robust HPO methods, often employing Bayesian optimization, bandit algorithms, or evolutionary strategies, and adapting these techniques to specific applications such as reinforcement learning and neural architecture search. Effective HPO is crucial for advancing machine learning across domains: it reduces computational cost, improves model accuracy, and enables the use of more complex models in resource-constrained settings. The development of standardized benchmarks and improved analysis techniques is also a key area of focus.
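To make the bandit-style, multi-fidelity methods mentioned above concrete, here is a minimal sketch of successive halving: many configurations are evaluated at a small budget (e.g., one epoch), the worst are discarded, and survivors are re-evaluated at a larger budget. The function names, the toy objective, and the interpretation of `budget` as epochs are illustrative assumptions, not taken from any of the listed papers.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Multi-fidelity HPO sketch: evaluate all configs at a small budget,
    keep the best 1/eta fraction, then repeat at eta times the budget.
    `budget` is a stand-in for training epochs or similar fidelity."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        # Lower score is better; rank survivors by their low-fidelity loss.
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Hypothetical toy objective: loss is minimized near lr = 0.1, and
# evaluations at larger budgets are less noisy.
def evaluate(config, budget):
    noise = random.gauss(0.0, 1.0 / budget)
    return (config["lr"] - 0.1) ** 2 + noise

random.seed(0)
configs = [{"lr": random.uniform(1e-3, 1.0)} for _ in range(16)]
best = successive_halving(configs, evaluate)
print(best)
```

The key efficiency argument is that most of the 16 configurations are eliminated after only cheap evaluations, so the total budget spent is far below that of training every configuration to completion.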
Papers
Shrink-Perturb Improves Architecture Mixing during Population Based Training for Neural Architecture Search
Alexander Chebykin, Arkadiy Dushatskiy, Tanja Alderliesten, Peter A. N. Bosman
Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?
Romain Egele, Isabelle Guyon, Yixuan Sun, Prasanna Balaprakash