Stochastic Recursive Gradient
Stochastic Recursive Gradient (SRG) methods are a class of variance-reduced stochastic optimization algorithms that aim to accelerate convergence in machine learning and related fields by estimating gradients recursively: each new gradient estimate reuses the previous one, corrected by the difference of stochastic gradients evaluated at consecutive iterates. Current research focuses on improving the efficiency and theoretical guarantees of SRG algorithms, particularly in non-convex settings and for problems with large datasets, and explores variants such as SARAH and its probabilistic counterparts. These advances matter because they yield faster training and improved generalization in applications such as federated learning and bilevel optimization, while also providing a stronger theoretical understanding of convergence rates and statistical properties.
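
To make the recursive estimation concrete, below is a minimal sketch of a SARAH-style update loop. The callables `grad_full` (full gradient) and `grad_i` (gradient of one component function), and all parameter names, are illustrative assumptions, not part of any specific library; the structure follows the standard SARAH scheme of a full-gradient anchor per outer loop followed by recursive inner updates.

```python
import numpy as np

def sarah_sketch(w0, grad_full, grad_i, n, step_size=0.01,
                 inner_steps=100, outer_loops=10, seed=0):
    """Sketch of a SARAH-style recursive gradient method.

    grad_full(w): full gradient of the objective at w.
    grad_i(w, i): gradient of the i-th component function at w.
    (Names and signature are hypothetical, for illustration only.)
    """
    rng = np.random.default_rng(seed)
    w_prev = np.asarray(w0, dtype=float).copy()
    for _ in range(outer_loops):
        # Anchor the estimator with one full-gradient evaluation.
        v = grad_full(w_prev)
        w = w_prev - step_size * v
        for _ in range(inner_steps):
            i = rng.integers(n)  # sample one component uniformly at random
            # Recursive update: keep the previous estimate and correct it by
            # the difference of stochastic gradients at consecutive iterates.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - step_size * v
    return w
```

Unlike SVRG, which recenters every stochastic gradient on the fixed anchor point, this recursive estimator updates itself from iterate to iterate, which is the defining feature of SRG methods.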