Decentralized Minimax

Decentralized minimax optimization focuses on solving minimax problems (finding a saddle point of a single objective that is minimized over one block of variables and maximized over another) across a network of distributed agents without a central coordinator. Current research emphasizes efficient algorithms, such as those incorporating adaptive stepsizes, variance reduction, and gradient tracking, that achieve near-optimal convergence rates even under challenging conditions like nonconvex-nonconcave objectives and data heterogeneity. This area is crucial for advancing distributed machine learning applications, including federated learning and multi-agent reinforcement learning, by enabling robust and scalable training of models in decentralized environments.
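To make the setting concrete, below is a minimal sketch of decentralized gradient descent ascent (DGDA) on a toy strongly-convex-strongly-concave problem. The network, local objectives `f_i(x, y) = (a_i/2)x² + b_i·x·y − (c_i/2)y²`, mixing matrix, and stepsize are all illustrative assumptions, not taken from any specific paper: each agent averages its iterates with its ring neighbors (gossip step) and then takes a local gradient step, descending in `x` and ascending in `y`.

```python
import numpy as np

# Hypothetical toy problem: n agents, each holding a local convex-concave
# objective f_i(x, y) = (a_i/2) x^2 + b_i x y - (c_i/2) y^2.
# The saddle point of the averaged objective is (x*, y*) = (0, 0).
rng = np.random.default_rng(0)
n = 8
a = rng.uniform(1.0, 2.0, n)   # strong convexity in x
b = rng.uniform(-1.0, 1.0, n)  # bilinear coupling terms
c = rng.uniform(1.0, 2.0, n)   # strong concavity in y

# Doubly stochastic mixing matrix for a ring network (gossip averaging).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

x = rng.standard_normal(n)  # each agent's local copy of the min variable
y = rng.standard_normal(n)  # each agent's local copy of the max variable
eta = 0.05                  # stepsize, assumed small enough for stability

for _ in range(2000):
    gx = a * x + b * y      # local gradients d f_i / d x at current iterates
    gy = b * x - c * y      # local gradients d f_i / d y
    # Gossip step (average with neighbors), then local gradient step:
    x = W @ x - eta * gx    # descent on the minimization variable
    y = W @ y + eta * gy    # ascent on the maximization variable

# All agents should reach consensus near the saddle point (0, 0).
print(np.abs(x).max(), np.abs(y).max())
```

In this strongly-convex-strongly-concave regime plain DGDA with a fixed stepsize suffices; the gradient-tracking and variance-reduction techniques mentioned above are aimed at harder settings (stochastic gradients, heterogeneous data, nonconvex-nonconcave objectives) where this naive scheme stalls at a consensus-error floor or diverges.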

Papers