Momentum-Based
Momentum-based optimization methods accelerate the convergence of iterative algorithms by incorporating information from previous iterations, typically by accumulating an exponentially weighted average of past gradients; this can improve training efficiency and, in some cases, generalization performance. Current research focuses on improving robustness and efficiency in distributed and federated learning settings, for example by tolerating Byzantine failures and by mitigating catastrophic forgetting in large language models, often through novel momentum-based algorithms and adaptive learning-rate schemes. These advances matter for training large-scale machine learning models, particularly in resource-constrained environments and in applications that demand high accuracy and stability.
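As a minimal sketch of the core mechanism, assuming nothing beyond classical (heavy-ball) momentum, the snippet below maintains a velocity vector that accumulates an exponentially weighted sum of past gradients, so directions of consistent descent are amplified while oscillating components cancel. The function name and the toy quadratic are illustrative, not drawn from any specific paper.

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.02, beta=0.9, steps=200):
    """Heavy-ball momentum: v accumulates an exponentially weighted
    sum of past gradients, smoothing the update direction."""
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)          # gradient at the current iterate
        v = beta * v + g        # fold in information from past steps
        w = w - lr * v          # move along the smoothed direction
    return w

# Toy usage: minimize f(w) = 0.5 * w @ A @ w, an ill-conditioned quadratic
# where plain gradient descent oscillates along the steep axis.
A = np.diag([1.0, 50.0])
w_final = sgd_momentum(lambda w: A @ w, w=np.array([1.0, 1.0]))
print(w_final)  # close to the minimizer [0., 0.]
```

Variants differ mainly in where the gradient is evaluated and how steps are scaled: Nesterov momentum evaluates the gradient at a look-ahead point, while adaptive schemes such as Adam additionally rescale the step per coordinate.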