Gradient Descent Ascent
Gradient descent ascent (GDA) methods address minimax optimization problems of the form min_x max_y f(x, y), seeking saddle points at which a single objective is simultaneously minimized over one set of variables and maximized over another. Current research focuses on improving GDA's convergence speed and stability, particularly through algorithmic variations such as alternating updates, smoothing techniques, and optimistic gradient methods, often in the context of generative adversarial networks (GANs) and federated learning. These advances matter because efficient and stable solutions to minimax problems are crucial for machine learning applications including GAN training, reinforcement learning, and robust optimization.
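To make the update rules concrete, here is a minimal, self-contained Python sketch (not drawn from any of the surveyed papers) comparing simultaneous, alternating, and optimistic GDA on the classic bilinear toy problem min_x max_y x*y; the step size and iteration count are illustrative assumptions.

```python
# Sketch of three GDA variants on min_x max_y f(x, y) = x * y,
# whose unique saddle point is (0, 0). Step size and iteration
# count are illustrative choices.

def grad(x, y):
    # For f(x, y) = x * y: df/dx = y, df/dy = x.
    return y, x

def simultaneous_gda(x, y, lr=0.1, steps=1000):
    # Both players update from the same iterate. On bilinear problems
    # the iterates spiral outward, illustrating GDA's instability.
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y + lr * gy  # descent in x, ascent in y
    return x, y

def alternating_gda(x, y, lr=0.1, steps=1000):
    # The ascent player reacts to the descent player's fresh iterate.
    # On bilinear problems the iterates stay bounded but keep orbiting.
    for _ in range(steps):
        x = x - lr * grad(x, y)[0]
        y = y + lr * grad(x, y)[1]
    return x, y

def optimistic_gda(x, y, lr=0.1, steps=1000):
    # Optimistic GDA extrapolates with the previous gradient,
    # using 2*g_t - g_{t-1} in place of g_t; this damps the rotation,
    # and on bilinear problems the iterates converge to the saddle point.
    gx_prev, gy_prev = grad(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * (2 * gx - gx_prev), y + lr * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

for name, method in [("simultaneous", simultaneous_gda),
                     ("alternating", alternating_gda),
                     ("optimistic", optimistic_gda)]:
    print(name, method(1.0, 1.0))
# Expected behavior: simultaneous diverges, alternating oscillates
# near its starting radius, optimistic approaches (0, 0).
```

On this toy problem, the divergence of simultaneous GDA and the bounded orbit of the alternating variant illustrate why stabilized updates such as optimism are an active line of research.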