Hyperparameter Optimization
Hyperparameter optimization (HPO) focuses on automatically finding the best configuration for machine learning models, improving both their performance and their efficiency. Current research emphasizes developing more efficient and statistically robust HPO methods, often employing Bayesian optimization, bandit algorithms, or evolutionary strategies, and adapting these techniques to specific applications such as reinforcement learning and neural architecture search. This field is crucial for advancing machine learning across domains, as effective HPO reduces computational costs, improves model accuracy, and enables the use of more complex models in resource-constrained settings. Standardized benchmarks and improved analysis techniques are also key areas of focus.
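To make the Bayesian optimization approach mentioned above concrete, here is a minimal sketch of a Gaussian-process-based HPO loop with an expected-improvement acquisition function, built on standard scikit-learn and SciPy components. The objective `validation_error` and the single log-learning-rate search space are illustrative stand-ins; a real run would train and evaluate a model at each trial.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical noisy validation-error surface over one hyperparameter,
# log10(learning rate); a real objective would train and score a model.
def validation_error(log_lr):
    return (log_lr + 3.0) ** 2 + 0.1 * rng.normal()

bounds = (-6.0, 0.0)  # search log10(lr) in [1e-6, 1e0]

# Initial random design of three trials.
X = rng.uniform(*bounds, size=(3, 1))
y = np.array([validation_error(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    # Fit the surrogate to all observations so far.
    gp.fit(X, y)
    # Score a candidate grid with expected improvement (minimization form).
    cand = np.linspace(*bounds, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Evaluate the most promising candidate and append the result.
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0]))

print(f"best log10(lr) = {X[np.argmin(y)][0]:.2f}, error = {y.min():.3f}")
```

The surrogate-plus-acquisition structure shown here is the common core of the Bayesian HPO methods surveyed above; multi-fidelity and bandit variants mainly change how (and at what cost) each trial is evaluated.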
Papers
Shrink-Perturb Improves Architecture Mixing during Population Based Training for Neural Architecture Search
Alexander Chebykin, Arkadiy Dushatskiy, Tanja Alderliesten, Peter A. N. Bosman
Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?
Romain Egele, Isabelle Guyon, Yixuan Sun, Prasanna Balaprakash
Intelligent sampling for surrogate modeling, hyperparameter optimization, and data analysis
Chandrika Kamath
Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels
Alexander Immer, Tycho F. A. van der Ouderaa, Mark van der Wilk, Gunnar Rätsch, Bernhard Schölkopf
Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter, Josif Grabocka