Minimax Optimization
Minimax optimization seeks saddle points of functions, which is crucial for problems with competing objectives or adversarial structure. Current research emphasizes efficient algorithms, particularly for nonconvex-concave and nonconvex-nonconcave settings, often combining techniques such as gradient descent-ascent, optimistic gradient methods, and variance reduction within both centralized and decentralized (federated learning) frameworks. These advances improve the speed and robustness of optimization in applications including generative adversarial networks, robust machine learning, and reinforcement learning.
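To make the contrast between the methods mentioned above concrete, here is a minimal sketch (not drawn from any specific paper) comparing plain gradient descent-ascent with the optimistic variant on the classic bilinear saddle problem f(x, y) = x·y, whose unique saddle point is the origin. Plain GDA spirals outward on this problem, while the optimistic update, which extrapolates using the previous gradient, converges; the step size and iteration counts are illustrative choices.

```python
import numpy as np

def grad(x, y):
    # f(x, y) = x * y, so df/dx = y and df/dy = x
    return y, x

def gda(x, y, eta=0.1, steps=2000):
    """Plain simultaneous gradient descent-ascent: descend in x, ascend in y."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

def ogda(x, y, eta=0.1, steps=2000):
    """Optimistic GDA: use 2*(current gradient) - (previous gradient)."""
    gx_prev, gy_prev = grad(x, y)  # warm start: first step reduces to GDA
    for _ in range(steps):
        gx, gy = grad(x, y)
        x = x - eta * (2 * gx - gx_prev)
        y = y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

# Distance from the saddle point at (0, 0) after 2000 iterations:
print(np.hypot(*gda(1.0, 1.0)))   # grows: plain GDA cycles outward
print(np.hypot(*ogda(1.0, 1.0)))  # shrinks: optimistic GDA converges
```

The single extrapolation term in `ogda` is what damps the rotational dynamics that make bilinear games hard for plain GDA; this is the basic mechanism behind the optimistic gradient methods discussed above.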