Minimax Optimization

Minimax optimization seeks saddle points of functions over two sets of variables, one minimized and one maximized, which makes it central to problems with competing objectives or adversarial structure. Current research emphasizes efficient algorithms for the nonconvex-concave and nonconvex-nonconcave settings, often combining gradient descent-ascent, optimistic gradient methods, and variance reduction within both centralized and decentralized (federated learning) frameworks. These advances improve the speed and robustness of optimization in applications such as generative adversarial networks, robust machine learning, and reinforcement learning.
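The contrast between gradient descent-ascent and optimistic gradient methods can be seen on the textbook bilinear saddle problem min_x max_y f(x, y) = x*y, whose unique saddle point is (0, 0). The sketch below is illustrative only (function, step size, and iteration count are chosen for demonstration, not taken from any cited paper): plain simultaneous GDA spirals away from the saddle, while the optimistic correction 2*g_t - g_{t-1} damps the rotation and converges.

```python
import math

def grad(x, y):
    """Gradients of f(x, y) = x * y: df/dx = y, df/dy = x."""
    return y, x

def gda(x, y, lr=0.1, steps=1000):
    """Simultaneous gradient descent-ascent: descend in x, ascend in y."""
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y + lr * gy
    return x, y

def ogda(x, y, lr=0.1, steps=1000):
    """Optimistic GDA: the extrapolated gradient 2*g_t - g_{t-1}
    counteracts the rotational dynamics of bilinear games."""
    gx_prev, gy_prev = grad(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * (2 * gx - gx_prev), y + lr * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

# Starting from (1, 1), track the distance to the saddle point (0, 0):
print(math.hypot(*gda(1.0, 1.0)))   # grows: plain GDA diverges here
print(math.hypot(*ogda(1.0, 1.0)))  # shrinks: OGDA converges
```

On this problem the GDA iterates have modulus sqrt(1 + lr^2) > 1 per step, so divergence is not a tuning artifact; the optimistic half-step of extra "anticipation" is exactly what restores convergence, which is one reason these methods feature prominently in the nonconvex-nonconcave literature.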

Papers