Stochastic Gradient Descent Ascent
Stochastic Gradient Descent Ascent (SGDA) is a family of algorithms for solving minimax optimization problems, which arise frequently in machine learning applications such as Generative Adversarial Networks (GANs) and reinforcement learning. Current research focuses on improving the convergence rates and stability of SGDA, particularly on taming oscillatory behavior and on the challenges posed by nonconvex-nonconcave objectives. This work includes developing novel algorithms such as dissipative SGDA and variance-reduced methods, as well as analyzing the effects of alternating updates and adaptive step sizes. These advances are crucial for improving the efficiency and reliability of training complex machine learning models.
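To make the setting concrete, the following is a minimal sketch of plain (simultaneous-update) SGDA on a toy strongly-convex-strongly-concave saddle problem. The objective, the noise model, and the 1/sqrt(t) step-size schedule are illustrative assumptions, not taken from any particular paper discussed above:

```python
import numpy as np

# Toy saddle problem (assumed for illustration):
#   f(x, y) = 0.5*x^2 + x*y - 0.5*y^2
# The min player controls x, the max player controls y; the unique
# saddle point is (x, y) = (0, 0).
rng = np.random.default_rng(0)

def stochastic_grads(x, y, noise=0.1):
    """Exact gradients of f plus zero-mean noise, mimicking minibatch sampling."""
    gx = x + y + noise * rng.standard_normal()   # df/dx
    gy = x - y + noise * rng.standard_normal()   # df/dy
    return gx, gy

x, y = 2.0, -2.0
for t in range(1, 5001):
    lr = 0.5 / t ** 0.5          # decaying step size helps tame oscillation
    gx, gy = stochastic_grads(x, y)
    x -= lr * gx                 # descent step for the min player
    y += lr * gy                 # ascent step for the max player

print(abs(x), abs(y))            # both shrink toward the saddle at the origin
```

Note that the two updates here are computed from the same iterate (simultaneous updates); the alternating variant studied in the literature would instead recompute the gradient of y at the freshly updated x, which can change convergence behavior. On a purely bilinear objective such as f(x, y) = x*y, this simultaneous scheme with a constant step size would spiral outward, which is the oscillatory behavior the research above targets.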