Related Hyperparameters
Related hyperparameters in machine learning refer to the challenge of jointly configuring multiple interdependent settings within an algorithm, choices that strongly affect model performance and efficiency. Current research focuses on automating hyperparameter optimization (HPO) across diverse models, including deep neural networks, reinforcement learning agents, and large language models, using techniques such as Bayesian optimization, evolutionary algorithms, and even large language models themselves as automated tuners. Effective HPO is crucial for improving model accuracy, generalization, and resource efficiency, accelerating progress across machine learning applications and fostering more robust, reliable model development.
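To make the interdependence concrete, here is a minimal, self-contained sketch (not drawn from the papers below) of random-search HPO over two coupled hyperparameters. The synthetic objective `validation_loss` is an assumption for illustration: its best learning rate shifts with the chosen batch size, loosely mimicking the linear-scaling heuristic for SGD, so the two settings cannot be tuned independently.

```python
import random

def validation_loss(lr, batch_size):
    # Synthetic objective (hypothetical): the optimal learning rate
    # scales with batch size, so the two hyperparameters interact.
    optimal_lr = 0.001 * (batch_size / 32)
    return (lr - optimal_lr) ** 2 + 0.01 / batch_size

def random_search(n_trials=200, seed=0):
    # Simple random search: sample lr log-uniformly, batch size from a
    # discrete grid, and keep the configuration with the lowest loss.
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-5, -1)              # log-uniform in [1e-5, 1e-1]
        batch_size = rng.choice([16, 32, 64, 128, 256])
        loss = validation_loss(lr, batch_size)
        if best is None or loss < best[0]:
            best = (loss, lr, batch_size)
    return best

if __name__ == "__main__":
    loss, lr, bs = random_search()
    print(f"best loss={loss:.5f} at lr={lr:.5f}, batch_size={bs}")
```

Methods such as Bayesian optimization improve on this baseline by modeling the joint response surface, which lets them exploit exactly these kinds of interactions between settings.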
Papers
HyperImpute: Generalized Iterative Imputation with Automatic Model Selection
Daniel Jarrett, Bogdan Cebere, Tennison Liu, Alicia Curth, Mihaela van der Schaar
Multi-Objective Hyperparameter Optimization in Machine Learning -- An Overview
Florian Karl, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, Martin Binder, Lennart Schneider, Janek Thomas, Jakob Richter, Michel Lang, Eduardo C. Garrido-Merchán, Juergen Branke, Bernd Bischl