Stochastic Gradient Descent Ascent

Stochastic Gradient Descent Ascent (SGDA) is a family of algorithms for solving minimax optimization problems of the form min_x max_y f(x, y): the minimizing variables take stochastic gradient descent steps while the maximizing variables take stochastic gradient ascent steps. Such problems arise frequently in machine learning, for example in Generative Adversarial Networks (GANs) and reinforcement learning. Current research focuses on improving SGDA's convergence rates and stability, particularly on taming its oscillatory behavior and on handling nonconvex-nonconcave objectives. This includes developing novel algorithms such as dissipative SGDA and variance-reduced methods, as well as analyzing the effects of alternating (rather than simultaneous) updates and adaptive step sizes. These advances are important for training complex machine learning models efficiently and reliably.
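For intuition, the minimal sketch below contrasts simultaneous and alternating SGDA updates on the classic bilinear saddle f(x, y) = x * y, with Gaussian noise standing in for mini-batch gradient estimates; the objective, step size, and noise model are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def sgda(x, y, step=0.1, iters=200, alternating=False, noise=0.01, seed=0):
    """Toy SGDA on f(x, y) = x * y: minimize over x, maximize over y.

    Exact gradients are df/dx = y and df/dy = x; the added Gaussian
    noise mimics stochastic mini-batch gradient estimates.
    """
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        gx = y + noise * rng.standard_normal()  # stochastic gradient in x
        x_new = x - step * gx                   # descent step on x
        # Alternating SGDA evaluates the y-gradient at the freshly
        # updated x; simultaneous SGDA uses the old x for both steps.
        gy = (x_new if alternating else x) + noise * rng.standard_normal()
        y = y + step * gy                       # ascent step on y
        x = x_new
    return x, y

print(sgda(1.0, 1.0, alternating=False))  # simultaneous: iterates spiral outward
print(sgda(1.0, 1.0, alternating=True))   # alternating: iterates stay bounded
```

On this bilinear example the simultaneous updates spiral away from the saddle point at the origin, while the alternating updates remain on a (noisy) bounded orbit, which is one reason alternating schemes receive particular attention in the literature.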

Papers