Finite Sum Optimization
Finite-sum optimization concerns efficiently minimizing an objective expressed as the average of many component functions, f(x) = (1/n) Σᵢ fᵢ(x), a structure that arises throughout machine learning whenever a loss is averaged over training examples. Current research emphasizes faster and more robust algorithms, particularly decentralized methods for distributed computing and adaptive techniques that adjust learning rates dynamically, including variants of stochastic gradient descent and quasi-Newton methods. These advances aim to improve the scalability and efficiency of training large models, benefiting deep learning and large-scale data analysis through faster convergence and lower computational cost. Research also explores quantum algorithms as a route to potentially greater speedups.
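As a concrete illustration of the finite-sum setting, the sketch below minimizes f(x) = (1/n) Σᵢ fᵢ(x) with plain stochastic gradient descent, sampling one component gradient per step. The function names, the quadratic choice fᵢ(x) = (x − aᵢ)²/2, and all hyperparameters are illustrative assumptions, not drawn from any particular paper.

```python
import random

def sgd(component_grads, x0, lr=0.1, epochs=50, seed=0):
    """Minimize f(x) = (1/n) * sum_i f_i(x) by cycling through the
    component gradients in a fresh random order each epoch and taking
    one step per component (random-reshuffling SGD)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(epochs):
        for g in rng.sample(component_grads, len(component_grads)):
            x -= lr * g(x)
    return x

# Illustrative finite sum: f_i(x) = (x - a_i)^2 / 2, whose gradient is
# x - a_i; the minimizer of the average is the mean of the a_i (2.5 here).
targets = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, a=a: x - a for a in targets]
x_star = sgd(grads, x0=0.0)  # lands near the mean of the targets
```

With a constant step size the iterate oscillates in a small neighborhood of the minimizer rather than converging exactly; variance-reduced methods (e.g. SVRG/SAGA-style estimators) and decaying step sizes are the standard remedies studied in this literature.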