Hyperparameter Optimization
Hyperparameter optimization (HPO) focuses on automatically finding the best settings for machine learning models, improving their performance and efficiency. Current research emphasizes developing more efficient and statistically robust HPO methods, often employing Bayesian optimization, bandit algorithms, or evolutionary strategies, and adapting these techniques for specific applications such as reinforcement learning and neural architecture search. This field is crucial for advancing machine learning across domains, as effective HPO reduces computational costs, improves model accuracy, and enables the use of more complex models in resource-constrained settings. Standardized benchmarks and improved analysis techniques are also key areas of focus; a small worked example of the search loop these methods automate is sketched below.
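The following is a minimal, illustrative sketch of such a search loop using the Tree-Structured Parzen Estimator (the method studied in the Watanabe paper listed below) as implemented in the Optuna library. The choice of Optuna, the objective function, and the search space are assumptions for illustration, not details taken from any of the listed papers.

```python
# Minimal HPO sketch with the Tree-Structured Parzen Estimator (TPE),
# assuming the Optuna library is installed. The objective and search
# space below are purely illustrative stand-ins.
import optuna


def objective(trial):
    # Hypothetical search space: a continuous learning rate (log scale)
    # and an integer tree depth.
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    depth = trial.suggest_int("max_depth", 2, 10)
    # Stand-in for a validation loss; in practice, train and evaluate
    # a model here and return its validation metric.
    return (lr - 0.01) ** 2 + (depth - 5) ** 2


# TPESampler implements the Tree-Structured Parzen Estimator, one of the
# Bayesian-style HPO methods mentioned in the overview above.
study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(seed=0),
)
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```

After 50 trials, `study.best_params` holds the best configuration found and `study.best_value` its objective value; swapping the sampler (e.g., for a random or evolutionary one) changes the search strategy without altering the rest of the loop.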
Papers
Low-Variance Gradient Estimation in Unrolled Computation Graphs with ES-Single
Paul Vicol, Zico Kolter, Kevin Swersky
Tree-Structured Parzen Estimator: Understanding Its Algorithm Components and Their Roles for Better Empirical Performance
Shuhei Watanabe
Natural Evolution Strategy for Mixed-Integer Black-Box Optimization
Koki Ikeda, Isao Ono
Deep Ranking Ensembles for Hyperparameter Optimization
Abdus Salam Khazi, Sebastian Pineda Arango, Josif Grabocka
Hyperparameter optimization, quantum-assisted model performance prediction, and benchmarking of AI-based High Energy Physics workloads using HPC
Eric Wulff, Maria Girone, David Southwick, Juan Pablo García Amboage, Eduard Cuba