Accelerated Gradient Descent
Accelerated gradient descent (AGD) methods aim to speed up convergence to optimal solutions in optimization problems that are especially common in machine learning and related fields; for smooth convex objectives, Nesterov-style acceleration improves the worst-case convergence rate of plain gradient descent from O(1/k) to O(1/k^2). Current research focuses on enhancing AGD's performance through techniques such as preconditioning, adaptive learning rates, and momentum, and on applying it in diverse settings such as constrained optimization, distributed systems, and deep neural network training. These advances matter because they enable faster training of complex models, better generalization, and more efficient solutions across a wide range of scientific and engineering applications.
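To make the momentum idea concrete, here is a minimal sketch of Nesterov-style accelerated gradient descent for a smooth convex objective, assuming a known Lipschitz constant of the gradient. The function name nesterov_agd and the toy least-squares example are illustrative choices, not drawn from any particular paper listed below.

```python
import numpy as np

def nesterov_agd(grad, x0, lipschitz, n_iters=500):
    """Nesterov-style accelerated gradient descent for a smooth convex objective.

    grad:      callable returning the gradient at a point
    x0:        starting iterate
    lipschitz: Lipschitz constant L of the gradient (step size is 1/L)
    """
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()
    t = 1.0
    for _ in range(n_iters):
        # Gradient step taken from the extrapolated point y, not from x itself.
        x = y - grad(y) / lipschitz
        # Update the momentum coefficient via the standard t-sequence.
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # Extrapolate: look ahead along the direction of recent progress.
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# Toy example (assumed for illustration): least squares, min_x 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient A^T(Ax - b)
grad = lambda x: A.T @ (A @ x - b)
x_star = nesterov_agd(grad, np.zeros(20), L)
print(np.linalg.norm(grad(x_star)))      # gradient norm should be close to zero
```

The key design point is that the gradient is evaluated at the extrapolated point y rather than at the previous iterate, which is what distinguishes Nesterov acceleration from plain heavy-ball momentum; preconditioning and adaptive step sizes, mentioned above, would replace the fixed 1/L step with a matrix or per-coordinate scaling.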