Gradient-Based Algorithms
Gradient-based algorithms are fundamental to optimizing complex functions in machine learning and other fields, aiming to find minima efficiently and, in non-convex settings, to escape saddle points. Current research focuses on extending these methods to challenging settings such as bilevel optimization (nested optimization problems) and non-convex objectives, often employing techniques such as Moreau envelopes and adaptive subspace searches to improve efficiency and convergence guarantees. These advances matter for increasingly complex machine learning tasks, including hyperparameter optimization, neural architecture search, and robust model training, where they improve both final performance and training efficiency.
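As a concrete illustration of the basic iteration these methods share, here is a minimal NumPy sketch of plain gradient descent, applied first to a smooth quadratic and then to a Moreau-envelope smoothing of the non-smooth objective |x| (the smoothing device mentioned above). The function names, test objectives, step size, and smoothing parameter are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: x_{k+1} = x_k - lr * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Example: minimize the smooth quadratic f(x) = 0.5 * ||x - b||^2,
# whose gradient is x - b; the minimizer is x = b.
b = np.array([3.0, -1.0])
x_min = gradient_descent(lambda x: x - b, x0=np.zeros(2))
print(x_min)  # close to [3., -1.]

def moreau_envelope_grad(x, lam):
    """Gradient of the Moreau envelope of f(y) = |y| (elementwise).

    M_lam(x) = min_y |y| + (1/(2*lam)) * (y - x)^2 is a smooth surrogate
    for the non-smooth f, with gradient (x - prox_{lam*f}(x)) / lam.
    For f = |.| the proximal map is soft-thresholding.
    """
    prox = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return (x - prox) / lam

# Run gradient descent on the smoothed surrogate of |x| instead of
# the non-smooth original; both share the minimizer x = 0.
x_s = gradient_descent(lambda x: moreau_envelope_grad(x, lam=0.5),
                       x0=np.array([2.0, -2.0]))
print(x_s)  # close to [0., 0.]
```

The Moreau envelope trades a non-smooth objective for a smooth surrogate with the same minimizers, which is the property that makes it useful in the bilevel and non-convex work summarized above.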
Papers
Eighteen papers, dated from February 11, 2022 to June 14, 2024.