Variance-Reduced Stochastic Methods
Variance-reduced stochastic methods accelerate optimization by reducing the variance of stochastic gradient estimates, which yields faster convergence in machine learning and related fields. Current research extends these techniques to more complex settings, such as compositional minimax optimization, decentralized optimization under constraints (e.g., orthogonality), and saddle-point problems, often through algorithms that combine momentum, adaptive learning rates, and second-order information (e.g., Newton-type methods). These advances improve the efficiency of training large-scale models and of solving challenging optimization problems, with impact on areas such as reinforcement learning, robust empirical risk minimization, and scientific computing. The development of instance-dependent convergence bounds further strengthens both the theoretical understanding and the practical applicability of these methods.
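To make the core idea concrete, the sketch below shows one classical variance-reduced method (SVRG-style control variates) applied to a least-squares problem. It is a minimal illustration, not the method of any particular paper surveyed above; the problem data, step size, and epoch counts are assumptions chosen for demonstration.

```python
# Minimal SVRG-style sketch on a synthetic least-squares problem.
# All quantities (A, b, eta, epochs) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    # Gradient of the i-th squared-error term: (a_i^T w - b_i) a_i
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    # Full-batch gradient of (1/2n) * ||A w - b||^2
    return A.T @ (A @ w - b) / n

w = np.zeros(d)
eta, epochs, inner = 0.01, 30, n
for _ in range(epochs):
    w_snap = w.copy()        # snapshot iterate
    mu = full_grad(w_snap)   # full gradient at the snapshot
    for _ in range(inner):
        i = rng.integers(n)
        # Control-variate gradient estimate: unbiased, and its variance
        # shrinks as w and w_snap both approach the optimum.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= eta * g

print("distance to x_true:", np.linalg.norm(w - x_true))
```

The key design choice is the control variate `grad_i(w_snap, i) - mu`: it leaves the estimate unbiased while cancelling most of the noise of the single-sample gradient, which is what permits larger step sizes and faster convergence than plain SGD.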