Accelerated Convergence
Research on accelerated convergence in optimization develops algorithms that reach optimal solutions markedly faster than plain gradient descent; on smooth convex problems, for instance, Nesterov's accelerated gradient method attains an O(1/k^2) suboptimality rate after k iterations, versus O(1/k) for ordinary gradient descent. Current work emphasizes continuous-time models, particularly ODE limits of Nesterov's method and variants such as heavy-ball momentum, analyzes their convergence rates through energy and Lyapunov-function arguments, and explores their use in distributed and stochastic settings. These advances matter for large-scale optimization in machine learning, control systems, and scientific computing, where they yield substantial savings in computation and resources.
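To make the rate gap concrete, here is a minimal, self-contained sketch comparing plain gradient descent with Nesterov's accelerated gradient method on a random strongly convex quadratic. The objective, problem dimensions, step size 1/L, and the (k-1)/(k+2) momentum schedule are standard textbook choices assumed for illustration; they are not drawn from any particular paper in this collection.

```python
import numpy as np

# Illustrative quadratic objective f(x) = 0.5 x^T A x - b^T x (an assumed
# test problem for demonstration, not taken from any specific paper).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 20))
A = M.T @ M + 0.1 * np.eye(20)    # positive definite Hessian
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)    # exact minimizer, used to track error
L = np.linalg.eigvalsh(A).max()   # smoothness constant (largest eigenvalue)

def grad(x):
    return A @ x - b

def gradient_descent(x0, steps):
    x = x0.copy()
    errs = []
    for _ in range(steps):
        x -= grad(x) / L          # plain step: O(1/k) suboptimality
        errs.append(np.linalg.norm(x - x_star))
    return errs

def nesterov(x0, steps):
    x, x_prev = x0.copy(), x0.copy()
    errs = []
    for k in range(1, steps + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum extrapolation
        x_prev, x = x, y - grad(y) / L             # gradient step at y: O(1/k^2)
        errs.append(np.linalg.norm(x - x_star))
    return errs

x0 = np.zeros(20)
gd, nag = gradient_descent(x0, 200), nesterov(x0, 200)
print(f"after 200 steps: GD error {gd[-1]:.2e}, NAG error {nag[-1]:.2e}")
```

On a run like this, the accelerated iterates typically drive the error down far faster than the plain method, matching the O(1/k^2)-versus-O(1/k) comparison above.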