Hyperparameter Optimization
Hyperparameter optimization (HPO) focuses on automatically finding the best settings for machine learning models, improving their performance and efficiency. Current research emphasizes developing more efficient and statistically robust HPO methods, often employing Bayesian optimization, bandit algorithms, or evolutionary strategies, and adapting these techniques to specific applications such as reinforcement learning and neural architecture search. This field is crucial for advancing machine learning across domains, as effective HPO reduces computational costs, improves model accuracy, and enables the use of more complex models in resource-constrained settings. The development of standardized benchmarks and improved analysis techniques is also a key area of focus.
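To make the task concrete, the loop below is a minimal sketch of the simplest HPO baseline, random search: sample configurations from a search space, evaluate each with an objective (e.g. validation loss), and keep the best. The space, objective, and function names here are illustrative assumptions, not from any of the listed papers; Bayesian or bandit methods replace the uniform sampling with a smarter proposal strategy.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Minimal random-search HPO: sample configs uniformly, keep the best.

    space: dict mapping hyperparameter name -> (low, high) range.
    objective: callable taking a config dict, returning a score (lower is better).
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)  # e.g. validation loss of a model trained with cfg
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective: a quadratic bowl standing in for a real validation loss.
space = {"lr": (1e-4, 1e-1), "weight_decay": (0.0, 0.1)}
loss = lambda c: (c["lr"] - 0.01) ** 2 + (c["weight_decay"] - 0.05) ** 2
best, score = random_search(loss, space, n_trials=200)
```

In practice the objective is an expensive training run, which is why the research above focuses on sample-efficient alternatives (Bayesian optimization models the objective with a surrogate; bandit methods such as successive halving stop poor configurations early).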
Papers
A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning
Minyoung Kim, Timothy M. Hospedales
Predicting from Strings: Language Model Embeddings for Bayesian Optimization
Tung Nguyen, Qiuyi Zhang, Bangding Yang, Chansoo Lee, Jorg Bornschein, Yingjie Miao, Sagi Perel, Yutian Chen, Xingyou Song