Stochastic Proximal Point

Stochastic proximal point methods (SPPMs) are iterative optimization algorithms that offer a robust alternative to standard stochastic gradient descent (SGD): instead of taking an explicit gradient step, each iteration solves a small proximal subproblem, x_{k+1} = argmin_x { f_i(x) + (1/(2γ)) ‖x − x_k‖² }, on a sampled component f_i of the objective. This implicit update makes SPPMs notably more stable with respect to the step size γ, which is particularly valuable in challenging settings such as non-convex problems and federated learning. Current research emphasizes developing and analyzing SPPM variants that incorporate variance reduction, momentum, and median gradient estimation to improve convergence rates and stability, especially under weaker assumptions on the objective function. These advances matter because they improve the efficiency and reliability of optimization across machine learning applications, including distributed and robust training, yielding better model performance at lower computational cost.
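
As a concrete illustration, below is a minimal sketch of SPPM for a least-squares objective, one of the few cases where the proximal subproblem has a closed-form solution; the function name `sppm_least_squares`, the step size, and the synthetic problem setup are illustrative choices, not taken from any particular paper.

```python
import numpy as np

def sppm_least_squares(A, b, gamma=0.5, n_iters=1000, seed=0):
    """Stochastic proximal point for the losses f_i(x) = 0.5 * (a_i @ x - b_i)**2.

    Each step solves the proximal subproblem
        x_{k+1} = argmin_x 0.5*(a_i @ x - b_i)**2 + (1/(2*gamma)) * ||x - x_k||^2
    exactly, using the closed form available for a single least-squares term.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)                  # sample one data point uniformly
        a_i, b_i = A[i], b[i]
        resid = a_i @ x - b_i                # residual at the current iterate
        # Closed-form prox step: an SGD-like update damped by
        # 1 / (1 + gamma * ||a_i||^2), which keeps iterates stable
        # even when gamma is large.
        x = x - gamma * resid / (1.0 + gamma * a_i @ a_i) * a_i
    return x

# Usage: recover x_true from noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = sppm_least_squares(A, b, gamma=1.0, n_iters=5000)
print(np.linalg.norm(x_hat - x_true))
```

The damping factor 1/(1 + γ‖a_i‖²) is what distinguishes this implicit update from the explicit SGD step x − γ·resid·a_i, and it is the source of the step-size robustness noted above; the variants surveyed here (variance reduction, momentum, median gradient estimation) modify how the sampled subproblem is formed or aggregated, not this basic prox structure.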

Papers