Optimistic Gradient
Optimistic gradient methods are a class of algorithms designed to efficiently solve minimax optimization problems, which arise frequently in machine learning applications such as GAN training and reinforcement learning. Current research focuses on improving the convergence rates and robustness of these methods, particularly in stochastic and non-convex settings, exploring variants such as optimistic gradient descent-ascent (OGDA) and incorporating techniques like momentum and exponential moving averages to mitigate noise. These advances matter because they improve the efficiency and stability of training complex models, leading to better performance across a range of applications.
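To make the idea concrete, here is a minimal sketch of optimistic gradient descent-ascent on the bilinear saddle-point problem min_x max_y f(x, y) = x·y, whose unique saddle point is (0, 0). Plain gradient descent-ascent cycles or diverges on this problem, while the optimistic correction (extrapolating with the previous gradient) restores convergence. The objective, step size, and iteration count below are illustrative choices, not drawn from any particular paper.

```python
def grad(x, y):
    # f(x, y) = x * y  =>  df/dx = y, df/dy = x
    return y, x

def ogda(x0=1.0, y0=1.0, eta=0.1, steps=2000):
    """Optimistic gradient descent-ascent on f(x, y) = x * y."""
    x, y = x0, y0
    gx_prev, gy_prev = grad(x, y)  # previous-step gradients
    for _ in range(steps):
        gx, gy = grad(x, y)
        # Optimistic update: 2 * (current gradient) - (previous gradient).
        x = x - eta * (2 * gx - gx_prev)  # descent on x
        y = y + eta * (2 * gy - gy_prev)  # ascent on y
        gx_prev, gy_prev = gx, gy
    return x, y

x, y = ogda()
print(x, y)  # both coordinates approach the saddle point (0, 0)
```

The "2g_t - g_{t-1}" term can be read as a one-step prediction of the next gradient; it is this anticipation of the opponent's move that damps the rotational dynamics responsible for the cycling of plain descent-ascent.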