Hyperparameter-Free
Hyperparameter-free methods aim to automate optimization in machine learning, eliminating the need to manually tune parameters such as learning rates. Current research focuses on algorithms that adapt automatically to different datasets and tasks, including Bayesian optimization, adaptive step-size rules for stochastic gradient descent, and novel learning-rate adaptation schemes. This work matters because it simplifies model training, improves efficiency, and enhances the reproducibility and generalizability of machine learning across diverse applications, from medical image synthesis to natural language processing.
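As a concrete illustration of an adaptive step-size rule, here is a minimal sketch (not drawn from any specific paper above) of gradient descent with the classic Polyak step size, which sets the learning rate from the current loss and gradient instead of requiring manual tuning. It assumes the optimal loss value f* is known, e.g. 0 for interpolating models; the function and parameter names are illustrative.

```python
import numpy as np

def polyak_sgd(grad_fn, loss_fn, x0, f_star=0.0, n_steps=100):
    """Gradient descent with the Polyak step size, a classic
    hyperparameter-free rule: eta_t = (f(x_t) - f*) / ||grad f(x_t)||^2.
    Assumes the optimal value f_star is known (0 for interpolating models)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(x)
        g_norm_sq = float(g @ g)
        if g_norm_sq < 1e-12:  # stop at an (approximately) stationary point
            break
        # Step size computed from the current iterate, not tuned by hand.
        eta = (loss_fn(x) - f_star) / g_norm_sq
        x = x - eta * g
    return x

# Usage: minimize f(x) = ||x||^2 (so f* = 0) with no learning rate to choose.
x_opt = polyak_sgd(grad_fn=lambda x: 2 * x,
                   loss_fn=lambda x: float(x @ x),
                   x0=np.ones(5))
print(x_opt)  # converges toward the zero vector
```

On this quadratic the rule yields a constant step of 1/4 and halves the iterate each step; more generally the step shrinks automatically as the loss approaches f*, which is the sense in which no learning-rate schedule needs to be specified.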