Momentum Gradient Descent
Momentum gradient descent is an optimization algorithm that accelerates plain gradient descent by folding past gradients into the current update, which can speed up and stabilize the training of machine learning models. Current research focuses on understanding its implicit regularization properties, particularly for neural networks and minimax optimization problems, and on extending it to new settings, including distributed and federated learning as well as optimization on Lie groups. These advances improve the efficiency and robustness of training complex models, with impact in areas ranging from natural language processing to robust regression and point cloud processing.
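To make the update rule concrete, the following is a minimal sketch of heavy-ball momentum gradient descent on a simple quadratic objective. The function name momentum_gd, the hyperparameter values, and the test problem are illustrative choices, not taken from any of the works referenced above.

```python
import numpy as np

def momentum_gd(grad_fn, theta0, lr=0.1, beta=0.9, n_steps=100):
    """Heavy-ball momentum gradient descent (illustrative sketch).

    Update: v_t = beta * v_{t-1} + grad(theta_t);  theta_{t+1} = theta_t - lr * v_t.
    """
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)      # velocity: exponentially weighted sum of past gradients
    for _ in range(n_steps):
        g = grad_fn(theta)        # gradient at the current iterate
        v = beta * v + g          # fold past gradients into the velocity
        theta = theta - lr * v    # step along the accumulated direction
    return theta

# Example: minimize an ill-conditioned quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 50.0])
grad = lambda x: A @ x
x_star = momentum_gd(grad, theta0=[5.0, 5.0], lr=0.02, beta=0.9, n_steps=500)
print(x_star)  # approaches the minimizer [0, 0]
```

Setting beta = 0 recovers plain gradient descent; larger beta lets the accumulated velocity damp oscillations along steep directions while maintaining progress along shallow ones, which is the intuition behind the acceleration described above.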