Minimax Optimization
Minimax optimization seeks saddle points of a function, which is crucial for problems with competing objectives or adversarial scenarios. Current research emphasizes efficient algorithms, particularly for nonconvex-concave and nonconvex-nonconcave settings, often incorporating techniques such as gradient descent-ascent, optimistic gradient methods, and variance reduction within both centralized and decentralized (federated learning) frameworks. These advances drive progress in diverse applications, including generative adversarial networks, robust machine learning, and reinforcement learning, by improving both the speed and robustness of the optimization process.
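To make the distinction between these methods concrete, here is a minimal sketch (not from any of the listed papers) contrasting plain gradient descent-ascent (GDA) with optimistic GDA (OGDA) on the bilinear saddle-point problem f(x, y) = x * y, whose unique saddle point is (0, 0). The step size and iteration count are illustrative choices. On this problem, simultaneous GDA is known to spiral away from the saddle point, while OGDA, which extrapolates using the previous gradient, converges to it.

```python
def grad(x, y):
    # f(x, y) = x * y, so df/dx = y (descent player), df/dy = x (ascent player)
    return y, x

def gda(x, y, eta=0.1, steps=2000):
    """Plain simultaneous gradient descent-ascent."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y + eta * gy  # x descends, y ascends
    return x, y

def ogda(x, y, eta=0.1, steps=2000):
    """Optimistic GDA: use 2*(current gradient) - (previous gradient)."""
    gx_prev, gy_prev = grad(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x = x - eta * (2 * gx - gx_prev)
        y = y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

x0, y0 = 1.0, 1.0
print("GDA: ", gda(x0, y0))   # oscillates with growing amplitude: diverges
print("OGDA:", ogda(x0, y0))  # contracts toward the saddle point (0, 0)
```

The single-line change in OGDA (the "2g_t - g_{t-1}" extrapolation) is what damps the rotational dynamics that make plain GDA fail on bilinear couplings, which is one reason optimistic methods feature prominently in the literature summarized above.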