Stochastic Proximal Gradient
Stochastic proximal gradient methods extend stochastic gradient descent to optimization problems with non-smooth regularizers, which arise in many machine learning tasks. Current research focuses on improving convergence rates and sample complexity, particularly for non-convex problems, through variance-reduction techniques (e.g., the SPIDER or SARAH estimators) and momentum methods (e.g., Polyak momentum). These advances matter because they enable efficient training of complex models in applications such as reinforcement learning and robust optimization, where noisy data and non-convex objectives are the main challenges.
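To make the core idea concrete, below is a minimal sketch (not any specific paper's method) of a mini-batch stochastic proximal gradient loop for L1-regularized least squares: each iteration takes a stochastic gradient step on the smooth loss and then applies the proximal operator of the non-smooth term, which for the L1 norm is soft-thresholding. The function names, step size, and synthetic data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def stochastic_proximal_gradient(A, b, lam=0.1, step=0.01, epochs=50, batch=16, seed=0):
    # Minimize (1/2m)||A_batch x - b_batch||^2 + lam * ||x||_1 with mini-batch prox-SGD.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            j = idx[start:start + batch]
            # Stochastic gradient of the smooth part on the mini-batch
            grad = A[j].T @ (A[j] @ x - b[j]) / len(j)
            # Gradient step, then proximal step for the L1 regularizer
            x = soft_threshold(x - step * grad, step * lam)
    return x

# Example usage on synthetic sparse-regression data (hypothetical problem sizes)
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = stochastic_proximal_gradient(A, b)
print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

Variance-reduced variants (e.g., replacing the mini-batch gradient with a SPIDER or SARAH estimator) and momentum variants modify only the gradient estimate in this loop; the proximal step stays the same.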