Momentum-Based Optimization
Momentum-based optimization methods accelerate the convergence of iterative algorithms by reusing information from previous iterations, typically by accumulating past gradients into a velocity term; this improves efficiency and can also improve generalization. Current research focuses on making these methods robust and efficient in distributed and federated learning, addressing challenges such as Byzantine failures and catastrophic forgetting in large language models, often through new momentum-based algorithms and adaptive learning-rate schemes. These advances are significant for training large-scale machine learning models, particularly in resource-constrained environments and in applications that demand high accuracy and stability.
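A minimal sketch of the classical (heavy-ball) momentum update described above, not taken from any of the listed papers; the function name and the learning-rate and momentum values are illustrative assumptions.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One heavy-ball momentum update: the velocity v is an exponentially
    decaying accumulation of past gradients, so each step uses information
    from previous iterations rather than the current gradient alone."""
    v = mu * v - lr * grad   # blend the previous direction with the new gradient
    w = w + v                # move the parameters along the accumulated direction
    return w, v

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = sgd_momentum_step(w, v, grad=w)
print(w)  # approaches the minimizer [0, 0]
```

Distributed, federated, and adaptive variants surveyed on this page build on this same velocity-accumulation idea while changing how gradients are aggregated or how lr and mu are scheduled.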
Papers
October 18, 2024
October 10, 2024
September 13, 2024
August 22, 2024
July 30, 2024
July 27, 2024
July 7, 2024
June 25, 2024
June 12, 2024
June 10, 2024
May 30, 2024
May 21, 2024
March 18, 2024
March 9, 2024
February 15, 2024
December 21, 2023
October 6, 2023
September 5, 2023
August 22, 2023
July 2, 2023