Hyperparameter Optimization
Hyperparameter optimization (HPO) focuses on automatically finding the best settings for machine learning models, improving their performance and efficiency. Current research emphasizes developing more efficient and statistically robust HPO methods, often employing Bayesian optimization, bandit algorithms, or evolutionary strategies, and adapting these techniques to specific applications such as reinforcement learning and neural architecture search. The field is crucial for advancing machine learning across domains: effective HPO reduces computational costs, improves model accuracy, and enables the use of more complex models in resource-constrained settings. The development of standardized benchmarks and improved analysis techniques is also a key area of focus.
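To make the bandit-based family of methods mentioned above concrete, here is a minimal sketch of successive halving, the building block underlying Hyperband-style HPO. All names (`successive_halving`, `evaluate`, the toy objective) are illustrative assumptions, not taken from any of the listed papers: the idea is simply to evaluate many configurations on a small budget, keep the best fraction, and re-evaluate the survivors on a larger budget.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Bandit-style successive halving: score every configuration on a
    small budget, keep the best 1/eta fraction, multiply the budget by
    eta, and repeat until a single configuration remains."""
    budget = min_budget
    while len(configs) > 1:
        scored = [(evaluate(cfg, budget), cfg) for cfg in configs]
        scored.sort(key=lambda pair: pair[0])  # lower loss is better
        keep = max(1, len(configs) // eta)
        configs = [cfg for _, cfg in scored[:keep]]
        budget *= eta
    return configs[0]

# Hypothetical toy objective: loss decreases toward |lr - 0.1| as the
# budget grows, mimicking a training curve that converges with epochs.
def evaluate(config, budget):
    return abs(config["lr"] - 0.1) + 1.0 / budget

random.seed(0)
candidates = [{"lr": 10 ** random.uniform(-4, 0)} for _ in range(27)]
best = successive_halving(candidates, evaluate)
```

With 27 candidates and eta=3, the procedure runs three rounds (27 → 9 → 3 → 1), spending most of its total budget on the promising configurations; this early-stopping of poor candidates is what makes bandit-based HPO cheaper than evaluating every configuration at full budget.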
Papers
Iterative Deepening Hyperband
Jasmin Brandt, Marcel Wever, Dimitrios Iliadis, Viktor Bengs, Eyke Hüllermeier
Scaling Laws for Hyperparameter Optimization
Arlind Kadra, Maciej Janowski, Martin Wistuba, Josif Grabocka
HOAX: A Hyperparameter Optimization Algorithm Explorer for Neural Networks
Albert Thie, Maximilian F. S. J. Menger, Shirin Faraji