Variance Reduction
Variance reduction techniques accelerate stochastic optimization by reducing the variance of stochastic gradient estimates; for strongly convex problems this recovers the linear convergence rate that plain SGD loses to gradient noise. Current research focuses on improving the robustness and efficiency of variance-reduced methods such as SVRG and its variants, exploring adaptive step sizes, the incorporation of second-order information, and extensions to non-convex and distributed settings, including minimax problems and Byzantine-robust scenarios. These advances matter because they enable more efficient training of large-scale machine learning models and improve the scalability of optimization algorithms in real-world applications.
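To make the core idea concrete, here is a minimal sketch of the SVRG update in Python/NumPy: a full gradient is periodically computed at a snapshot point and then used as a control variate for the subsequent stochastic steps, so the per-step gradient estimate stays unbiased while its variance shrinks as the iterate approaches the snapshot. The function names (`svrg`, `grad_i`) and all hyperparameter values are illustrative, not taken from any specific paper or library.

```python
import numpy as np

def svrg(grad_i, w0, n, eta=0.1, epochs=10, inner_steps=None, rng=None):
    """Minimal SVRG sketch for minimizing (1/n) * sum_i f_i(w).

    grad_i(w, i) must return the gradient of the i-th component at w.
    Names and defaults are illustrative, not from a specific library.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = n if inner_steps is None else inner_steps
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot: the control variate.
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # vanishes as w approaches w_snap.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= eta * g
    return w

# Usage sketch: least squares on synthetic data (assumed setup).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
w_hat = svrg(grad_i, np.zeros(5), n=100, eta=0.01, epochs=20, rng=rng)
```

The key design choice is the trade-off in the snapshot frequency: more frequent full-gradient passes give lower-variance steps but cost O(n) gradient evaluations each, which is why the inner loop length is typically set on the order of n.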