Accelerated Gradient Method
Accelerated gradient methods aim to speed up the convergence of gradient descent algorithms used to minimize objective functions, a crucial task in many scientific and engineering fields. Current research focuses on extending these methods to non-convex and non-smooth problems, often employing techniques such as Nesterov's acceleration, Anderson acceleration, and preconditioning, and on analyzing their behavior through continuous-time models and novel convergence analyses. These advances matter because they improve the efficiency of optimization in machine learning, particularly for training deep neural networks and for large-scale problems where standard gradient descent is too slow.
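To make the idea concrete, here is a minimal sketch of Nesterov-style accelerated gradient descent in NumPy. It is an illustration only: the function name `nesterov_agd`, the fixed step size `lr`, and the constant momentum coefficient are assumptions for this example (the classical method uses a momentum schedule tied to the iteration count, and a step size of 1/L for an L-smooth objective).

```python
import numpy as np

def nesterov_agd(grad, x0, lr=0.01, momentum=0.9, n_iters=1000):
    """Minimize a differentiable objective with Nesterov-style acceleration.

    grad     : callable returning the gradient of the objective at a point
    x0       : initial iterate (NumPy array)
    lr       : fixed step size (assumed here; 1/L is typical for L-smooth f)
    momentum : fixed momentum coefficient (an assumption; Nesterov's original
               schedule updates this coefficient every iteration)
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # velocity (momentum) buffer
    for _ in range(n_iters):
        # Key difference from plain momentum: evaluate the gradient at the
        # "look-ahead" point x + momentum * v, not at x itself.
        g = grad(x + momentum * v)
        v = momentum * v - lr * g
        x = x + v
    return x

# Usage example: minimize the quadratic f(x) = 0.5 * x^T A x - b^T x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
x_star = nesterov_agd(lambda x: A @ x - b, x0=np.zeros(2))
print(x_star)                 # should approach the solution of A x = b
print(np.linalg.solve(A, b))  # reference solution
```

The look-ahead gradient evaluation is what distinguishes this scheme from plain heavy-ball momentum; swapping in a decaying momentum schedule recovers the accelerated convergence rate on smooth convex problems.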