Finite Sum Minimization
Finite-sum minimization seeks the minimizer of a function expressed as the average of many component functions, min_x F(x) = (1/n) Σ_{i=1}^n f_i(x), a problem that underlies empirical risk minimization in machine learning. Current research focuses on developing and analyzing stochastic gradient methods, including variance-reduction techniques and adaptive step-size strategies, to improve convergence rates and reduce computational cost, particularly for non-convex and constrained problems. These advances matter because they make it feasible to train large-scale models and to solve complex optimization problems across machine learning and data analysis.
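As a concrete illustration of a variance-reduced stochastic gradient method for finite sums, the sketch below applies SVRG to a hypothetical least-squares problem, where each component is f_i(x) = ½(aᵢᵀx − bᵢ)². The data, step size, and epoch counts are illustrative assumptions, not taken from any particular paper; the inner loop uses the standard SVRG estimator g = ∇f_i(x) − ∇f_i(x̃) + ∇F(x̃), which is unbiased and has vanishing variance as the iterates converge.

```python
import numpy as np

# Hypothetical finite-sum problem: F(x) = (1/n) * sum_i 0.5 * (a_i @ x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true  # noiseless targets, so x_true is the exact minimizer

def grad_i(x, i):
    # Gradient of the i-th component f_i at x.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    # Gradient of the full objective F at x.
    return A.T @ (A @ x - b) / n

def svrg(x0, step=0.05, epochs=30, inner=None):
    """Stochastic variance-reduced gradient (SVRG) sketch."""
    inner = inner or n
    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()          # snapshot point
        mu = full_grad(x_snap)     # one full-gradient pass per epoch
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate.
            g = grad_i(x, i) - grad_i(x_snap, i) + mu
            x -= step * g
    return x

x_hat = svrg(np.zeros(d))
```

Compared with plain SGD, the snapshot correction lets SVRG use a constant step size and still converge linearly on strongly convex finite sums, at the cost of one full-gradient evaluation per epoch.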