Finite-Sum Minimization

Finite-sum minimization seeks a minimizer of an objective written as an average of component functions, f(x) = (1/n) Σ_i f_i(x), a structure that arises whenever a loss is summed over training examples and is therefore ubiquitous in machine learning. Current research focuses on developing and analyzing stochastic gradient methods, including variance-reduction techniques and adaptive step-size strategies, to improve convergence rates and reduce computational cost, particularly for non-convex and constrained problems. These advances matter because they make it practical to train large-scale models and to solve complex optimization problems in machine learning, data analysis, and related fields.
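To make the finite-sum structure and the role of variance reduction concrete, here is a minimal sketch, not taken from any of the papers below, of SVRG-style stochastic gradient descent on a synthetic least-squares objective. The data, step size, and function names (grad_i, full_grad, svrg) are illustrative assumptions; SVRG is used as one representative variance-reduction method.

```python
# Minimal sketch: SVRG for minimizing f(x) = (1/n) * sum_i f_i(x),
# where f_i(x) = 0.5 * (a_i^T x - b_i)^2 on synthetic data.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Gradient of the full objective f(x) = (1/n) * sum_i f_i(x)."""
    return A.T @ (A @ x - b) / n

def svrg(x0, step=0.05, epochs=30, inner=None):
    """Each epoch computes one full gradient at a snapshot point, then takes
    cheap stochastic steps whose variance shrinks as x approaches the snapshot."""
    x, m = x0.copy(), inner or n
    for _ in range(epochs):
        x_ref = x.copy()
        g_ref = full_grad(x_ref)  # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced estimate: unbiased for the full gradient
            v = grad_i(x, i) - grad_i(x_ref, i) + g_ref
            x -= step * v
    return x

x_hat = svrg(np.zeros(d))
print("final objective:", 0.5 * np.mean((A @ x_hat - b) ** 2))
```

Each epoch costs one full-gradient pass plus n single-component gradient evaluations, yet the variance-reduced update lets the method use a constant step size and converge faster than plain SGD on smooth, strongly convex sums.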

Papers