Momentum-Based Optimization
Momentum-based optimization methods aim to accelerate the convergence of iterative algorithms by accumulating information from previous iterations, typically as an exponentially weighted average of past gradients, which can improve both training efficiency and generalization. Current research focuses on improving robustness and efficiency in distributed and federated learning, addressing challenges such as Byzantine failures, and mitigating catastrophic forgetting in large language models, often via novel momentum-based algorithms and adaptive learning-rate schemes. These advances have significant implications for training large-scale machine learning models, particularly in resource-constrained environments and in applications that demand high accuracy and stability.
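To make the core idea concrete, below is a minimal sketch of the classical heavy-ball (Polyak) momentum update, in which a velocity term accumulates past gradients. The function names and the quadratic test problem are illustrative assumptions, not drawn from any particular paper in this collection.

```python
import numpy as np

def sgd_momentum(grad_fn, theta0, lr=0.03, beta=0.9, steps=200):
    """Heavy-ball momentum: the velocity is an exponentially weighted
    sum of past gradients, so consistent descent directions reinforce
    while oscillating components tend to cancel."""
    theta = np.asarray(theta0, dtype=float)
    velocity = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)                # gradient at the current iterate
        velocity = beta * velocity + g    # incorporate previous iterations
        theta = theta - lr * velocity     # step along the smoothed direction
    return theta

# Illustrative use: an ill-conditioned quadratic f(x) = 0.5 * x^T A x,
# where momentum damps oscillation along the steep coordinate.
A = np.diag([1.0, 25.0])
theta_star = sgd_momentum(lambda x: A @ x, theta0=[5.0, 5.0])
print(theta_star)  # approaches the minimizer [0, 0]
```

Setting beta = 0 recovers plain gradient descent; values near 0.9 are a common default because they smooth the trajectory across iterations without badly overshooting.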