Adaptive Variance

Adaptive variance reduction techniques in stochastic optimization improve the efficiency and robustness of algorithms by dynamically reducing the variance of gradient estimates as optimization proceeds, typically without requiring problem parameters such as smoothness or noise bounds to be known in advance. Current research focuses on developing adaptive methods with weaker assumptions, optimal convergence rates, and applicability to a wider range of problem settings, including non-convex objectives, compositional optimization, and optimization on Riemannian manifolds; prominent approaches include adaptive STORM variants, loopless SVRG methods, and covariance-adaptive algorithms. These advances matter because they enable faster and more reliable solutions across machine learning, data science, and other fields where large-scale datasets and complex models are prevalent.
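To make the idea concrete, here is a minimal sketch of the STORM-style recursive-momentum estimator on a toy noisy quadratic. The objective, noise model, and all parameter values (`lr`, `a`, `noise`) are illustrative choices, not taken from any particular paper; the key mechanism shown is that the same stochastic sample is evaluated at both the current and previous iterate, so much of the noise cancels and the variance of the gradient estimate shrinks without the full-gradient checkpoints SVRG requires.

```python
import numpy as np

# Toy objective f(x) = 0.5 * ||x||^2 with stochastic gradients
# g(x, xi) = x + xi, where xi is zero-mean Gaussian noise.
def stochastic_grad(x, xi):
    return x + xi

def storm(x0, steps=400, lr=0.1, a=0.2, noise=0.5, seed=0):
    """STORM-style recursive momentum (sketch with fixed momentum a).

    Estimator update:
        d_t = g(x_t, xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}, xi_t))
    The SAME sample xi_t is used at both x_t and x_{t-1}, so the
    correction term cancels most of the sampling noise carried in d.
    Adaptive STORM variants tune a (and lr) from observed gradients;
    here a is held constant for simplicity.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    xi = rng.normal(0.0, noise, size=x.shape)
    d = stochastic_grad(x, xi)  # warm-start the estimator with one sample
    for _ in range(steps):
        x_prev, x = x, x - lr * d
        xi = rng.normal(0.0, noise, size=x.shape)  # fresh sample xi_t
        # Recursive momentum: correct the old estimate using the new sample
        # evaluated at both the new and the previous iterate.
        d = stochastic_grad(x, xi) + (1 - a) * (d - stochastic_grad(x_prev, xi))
    return x

x_final = storm(np.full(5, 10.0))
print(np.linalg.norm(x_final))  # close to the optimum at the origin
```

In contrast, plain SGD with the same step size would hover around the optimum at a noise floor set by the full gradient-noise variance; the recursive correction term is what lets STORM drive the estimate's variance down over time.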

Papers