Finite Sum
Finite-sum optimization concerns minimizing an objective that is the sum (or average) of many component functions, a structure that arises throughout machine learning and other data-driven fields, where each component is typically the loss on a single training example. Current research emphasizes efficient algorithms, such as variance-reduced stochastic gradient methods and their accelerated variants, that scale to large datasets and distributed computing environments, often addressing challenges like asynchronous computation and heterogeneous worker performance. These advances improve the speed and scalability of optimization across applications ranging from training complex machine learning models to resource allocation in distributed systems. A key trend is the development of algorithms with provable convergence guarantees even under challenging conditions such as non-convexity and unreliable communication networks.
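For concreteness, the canonical finite-sum problem can be written as

$$
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
$$

where $n$ is the number of data points and each $f_i$ is the loss incurred on the $i$-th example. The difficulty is that when $n$ is large, evaluating the full gradient $\nabla f(x)$ at every step is expensive, while plain stochastic gradients $\nabla f_i(x)$ are cheap but noisy; variance-reduced methods aim to get the best of both.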
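As a minimal illustration of the variance-reduction idea, the sketch below implements SVRG-style updates (Johnson & Zhang, 2013) on a toy least-squares finite sum. The quadratic components, step size, and epoch counts are illustrative assumptions for this sketch, not details drawn from any particular work surveyed above.

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.01, epochs=30, inner_steps=None, rng=None):
    """Minimal SVRG sketch for f(x) = (1/n) * sum_i f_i(x).

    grad_i(i, x) must return the gradient of the i-th component at x.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = n if inner_steps is None else inner_steps
    snapshot = x0.copy()
    for _ in range(epochs):
        # Full gradient at the snapshot point: the "anchor" for this epoch.
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
        x = snapshot.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: unbiased estimate of
            # grad f(x) whose variance shrinks as x approaches the snapshot.
            g = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x -= step * g
        snapshot = x  # common practical choice; theory often uses a random iterate
    return snapshot

# Toy usage: least-squares components f_i(x) = 0.5 * (a_i @ x - b_i)**2.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 10
    A = rng.normal(size=(n, d))
    x_true = rng.normal(size=d)
    b = A @ x_true
    grad = lambda i, x: (A[i] @ x - b[i]) * A[i]
    x_hat = svrg(grad, np.zeros(d), n, step=0.01, epochs=30, rng=rng)
    print("distance to x*:", np.linalg.norm(x_hat - x_true))
```

Because the correction term `grad_i(i, x) - grad_i(i, snapshot) + full_grad` has vanishing variance near the solution, such methods achieve linear convergence on strongly convex finite sums with constant step sizes, which plain SGD cannot.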