Robbins-Monro
Robbins-Monro (RM) algorithms are iterative stochastic-approximation methods for finding the root of a function that can only be observed with noise, a fundamental problem in stochastic optimization. Current research focuses on accelerating RM's convergence, particularly by incorporating prior information and by developing proximal variants such as implicit stochastic gradient descent (ISGD) together with associated statistical inference methods. Extensions to Riemannian manifolds broaden RM's applicability to the non-Euclidean spaces that arise in optimization and game theory. These advances improve the algorithm's efficiency and reliability across diverse applications, especially for problems with limited data or high noise levels.
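The core RM iteration can be sketched in a few lines. Given only noisy evaluations of a function h, it updates the estimate with a decaying step size a_n satisfying the classic conditions (sum of a_n diverges, sum of a_n^2 converges). The toy target function and noise model below are illustrative assumptions, not from any particular paper:

```python
import random

def robbins_monro(noisy_h, x0, n_iters=10000, a=1.0):
    """Robbins-Monro iteration for finding x* with h(x*) = 0,
    given only noisy evaluations noisy_h(x) = h(x) + noise.

    Uses step sizes a_n = a / n, which satisfy the standard
    conditions: sum(a_n) = infinity, sum(a_n^2) < infinity.
    """
    x = x0
    for n in range(1, n_iters + 1):
        x = x - (a / n) * noisy_h(x)
    return x

random.seed(0)
# Illustrative example (an assumption, not from the text):
# h(x) = x - 2 observed with additive Gaussian noise; the root is x* = 2.
root = robbins_monro(lambda x: (x - 2.0) + random.gauss(0.0, 1.0), x0=0.0)
```

With a_n = 1/n and this linear h, the iterates reduce to a running average of the noisy observations, so the estimate concentrates around the root at the usual 1/sqrt(n) rate.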