Stochastic Extragradient

Stochastic Extragradient (SEG) methods are a family of algorithms for solving min-max optimization and variational inequality problems that arise throughout machine learning. Each iteration combines two stochastic gradient evaluations: an extrapolation ("look-ahead") step, followed by an update step taken from the original iterate using the gradient at the extrapolated point. Current research focuses on improving SEG's convergence rates and extending its applicability to broader problem classes, including non-monotone and high-dimensional settings, often through refined analyses of sampling strategies (e.g., random reshuffling) and step-size rules. These advances matter because they improve the efficiency and robustness of SEG on difficult problems in machine learning and related areas, such as semidefinite optimization. Establishing theoretical guarantees under weaker assumptions, such as the expected residual condition, is a key line of progress.
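As a concrete illustration, the sketch below runs same-sample SEG on a noisy bilinear saddle-point problem min_x max_y xᵀAy, a standard test case on which plain stochastic gradient descent-ascent tends to cycle or diverge. The problem instance, noise model, and step sizes are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
A /= np.linalg.norm(A, 2)   # normalize so the operator is 1-Lipschitz
sigma = 0.1                  # gradient-noise level (illustrative)

def F(z, xi):
    # Stochastic estimate of the VI operator for min_x max_y x^T A y:
    # F(z) = (A y, -A^T x), perturbed by additive noise xi.
    # The unique saddle point is z* = 0.
    x, y = z[:d], z[d:]
    return np.concatenate([A @ y, -A.T @ x]) + xi

z = rng.standard_normal(2 * d)
gamma1, gamma2 = 0.5, 0.25   # extrapolation / update step sizes (illustrative)

for t in range(5000):
    xi = sigma * rng.standard_normal(2 * d)   # one sample, reused below
    # Extrapolation ("look-ahead") step from the current iterate.
    z_half = z - gamma1 * F(z, xi)
    # Update step: move from the original iterate, but along the operator
    # evaluated at the extrapolated point, reusing the same sample.
    z = z - gamma2 * F(z_half, xi)

print("distance to saddle point:", np.linalg.norm(z))
```

With constant step sizes and persistent noise, SEG of this form is only expected to reach a neighborhood of the solution whose radius scales with the noise level; decreasing step sizes or variance-reduced gradient estimators are the usual ways to shrink it further.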

Papers