Finite-Sum Optimization

Finite-sum optimization studies the efficient minimization of functions expressed as the average of many component functions, f(x) = (1/n) * sum_{i=1}^n f_i(x), a structure that arises throughout machine learning whenever a loss is averaged over training examples. Current research emphasizes faster and more robust algorithms, particularly decentralized methods for distributed computing and adaptive techniques that adjust learning rates dynamically, including variants of gradient descent and quasi-Newton methods. These advances aim to improve the scalability and efficiency of training large models, benefiting areas such as deep learning and large-scale data analysis through faster convergence and lower computational cost. Research also explores quantum algorithms as a potential route to even greater speedups.
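
As a concrete illustration, the sketch below minimizes a least-squares finite sum with plain stochastic gradient descent (SGD), the baseline that the methods above refine: each step samples one component f_i and uses its gradient as an unbiased estimate of the full gradient. The problem sizes, step size, and iteration count are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

# Minimal SGD sketch for a finite sum f(x) = (1/n) * sum_i f_i(x),
# with least-squares components f_i(x) = 0.5 * (a_i @ x - b_i)**2.
# All sizes and hyperparameters below are illustrative assumptions.

rng = np.random.default_rng(0)
n, d = 1000, 10                            # n component functions, d parameters
A = rng.normal(size=(n, d))                # row i holds a_i
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def sgd(A, b, steps=5000, lr=0.01):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        i = rng.integers(len(b))           # sample one component uniformly
        grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of f_i: an unbiased
        x -= lr * grad_i                   # estimate of the full gradient
    return x

x_hat = sgd(A, b)
print(f"final average loss: {0.5 * np.mean((A @ x_hat - b) ** 2):.4f}")
```

Sampling a single f_i makes each step cost O(d) rather than the O(nd) of a full-gradient step, which is the key economy of finite-sum methods; the variance-reduced, adaptive, and decentralized variants studied in the papers below keep this low per-step cost while improving the convergence rate or distributing the work.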

Papers