Variance Reduction

Variance reduction techniques accelerate stochastic optimization by reducing the variance of stochastic gradient estimates, yielding faster convergence in machine learning tasks. Current research focuses on improving the robustness and efficiency of variance-reduced methods such as SVRG (stochastic variance reduced gradient) and its variants, exploring adaptive step sizes, the incorporation of second-order information, and extensions to non-convex and distributed settings, including minimax problems and Byzantine-robust scenarios. These advances are significant because they enable more efficient training of large-scale machine learning models and improve the scalability of algorithms for real-world applications. The core SVRG update is sketched below.
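
As a concrete reference point, here is a minimal NumPy sketch of the classic SVRG loop (Johnson & Zhang, 2013) applied to a least-squares problem. The function name, hyperparameters, and toy data are illustrative assumptions, not taken from any specific paper summarized here; the key idea is that each inner step uses the control-variate estimate grad_i(w) - grad_i(w_snap) + mu, whose variance shrinks as w approaches the snapshot.

```python
import numpy as np

def svrg_least_squares(A, b, eta=0.01, epochs=30, inner_steps=None, seed=0):
    """Minimal SVRG sketch for min_w (1/2n) ||Aw - b||^2.

    Per-sample gradient: grad_i(w) = a_i (a_i^T w - b_i).
    Names and defaults are illustrative, not from a specific paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = inner_steps or n            # common choice: one pass per epoch
    w_snap = np.zeros(d)            # snapshot iterate, often written \tilde{w}

    for _ in range(epochs):
        # Full gradient at the snapshot: the expensive, once-per-epoch part.
        mu = A.T @ (A @ w_snap - b) / n
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced estimate: grad_i(w) - grad_i(w_snap) + mu.
            g = a_i * (a_i @ w - b[i]) - a_i * (a_i @ w_snap - b[i]) + mu
            w -= eta * g
        w_snap = w                  # "Option I": snapshot = last inner iterate
    return w_snap

# Toy usage: recover a planted solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
b = A @ w_true
w_hat = svrg_least_squares(A, b)
print(np.linalg.norm(w_hat - w_true))  # should be near zero
```

Much of the research surveyed here modifies exactly these ingredients: adaptive schemes replace the fixed step size eta, second-order variants precondition the estimate g, and distributed or Byzantine-robust methods change how the full gradient mu is aggregated across workers.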

Papers