Minimax Optimization Problem

Minimax optimization, which seeks saddle points of a single objective that is minimized over one set of variables and maximized over another, is a crucial problem across diverse fields such as machine learning and control theory. Current research emphasizes developing efficient algorithms, such as stochastic gradient descent-ascent and extragradient methods, often tailored to specific problem structures (e.g., convex-concave, nonconvex-concave, or nonconvex-strongly-concave) and incorporating techniques like variance reduction and adaptive learning rates to improve convergence. These advances drive progress in applications ranging from robust optimization and adversarial training of neural networks to federated learning and reinforcement learning, yielding more reliable and efficient algorithms for complex systems.
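
As a concrete illustration of the algorithmic ideas above, the sketch below compares plain gradient descent-ascent with the extragradient method on the toy bilinear saddle-point problem min_x max_y f(x, y) = x * y, whose unique saddle point is (0, 0). The function names, step size, and iteration count are illustrative choices rather than the setup of any particular paper; the intent is only to show why the extragradient look-ahead step matters: plain descent-ascent spirals away from the saddle point, while extragradient converges toward it.

```python
# Toy convex-concave saddle-point problem: f(x, y) = x * y,
# with the unique saddle point at (x, y) = (0, 0).
# Gradients: df/dx = y, df/dy = x.

def grads(x, y):
    return y, x  # (df/dx, df/dy)

def gradient_descent_ascent(x0, y0, lr=0.1, steps=1000):
    """Plain gradient descent-ascent: descend in x, ascend in y."""
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grads(x, y)
        x, y = x - lr * gx, y + lr * gy
    return x, y

def extragradient(x0, y0, lr=0.1, steps=1000):
    """Extragradient: take a look-ahead step, then update the current
    iterate using the gradients evaluated at the look-ahead point."""
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grads(x, y)
        x_la, y_la = x - lr * gx, y + lr * gy        # look-ahead step
        gx_la, gy_la = grads(x_la, y_la)
        x, y = x - lr * gx_la, y + lr * gy_la        # corrected step
    return x, y

if __name__ == "__main__":
    # GDA drifts away from (0, 0); extragradient contracts toward it.
    print("GDA:          ", gradient_descent_ascent(1.0, 1.0))
    print("Extragradient:", extragradient(1.0, 1.0))
```

On this bilinear example the GDA iterates grow in norm at every step, whereas the extragradient iterates contract toward the origin for any sufficiently small step size; stochastic and variance-reduced variants of these updates replace the exact gradients with (corrected) sample estimates.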

Papers