Decentralized Minimax
Decentralized minimax optimization addresses saddle-point problems, in which a single objective is minimized over one set of variables and maximized over another, across a network of distributed agents with no central coordinator. Current research emphasizes efficient algorithms that incorporate adaptive stepsizes, variance reduction, and gradient tracking to achieve near-optimal convergence rates even under challenging conditions such as nonconvex-nonconcave objectives and data heterogeneity. This area underpins distributed machine learning applications, including federated learning and multi-agent reinforcement learning, by enabling robust and scalable model training in decentralized environments.
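As a concrete illustration, below is a minimal sketch of decentralized gradient descent-ascent with gossip averaging over a ring network, one simple instance of this family of methods. The per-agent quadratic objectives f_i(x, y) = a_i x^2/2 + b_i x y - c_i y^2/2, the Metropolis-style mixing matrix W, the stepsize, and all variable names are illustrative assumptions, not drawn from any particular paper.

    # Sketch: decentralized gradient descent-ascent (GDA) with gossip averaging.
    # Assumes a ring network of n agents and hypothetical per-agent
    # convex-concave quadratics f_i(x, y) = 0.5*a_i*x^2 + b_i*x*y - 0.5*c_i*y^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8  # number of agents

    # Heterogeneous local objective coefficients (illustrative values).
    a = rng.uniform(1.0, 2.0, n)
    b = rng.uniform(0.5, 1.5, n)
    c = rng.uniform(1.0, 2.0, n)

    # Doubly stochastic mixing matrix for a ring: each agent averages its
    # iterate with its two neighbors' iterates.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    x = rng.normal(size=n)  # each agent's local copy of the min variable
    y = rng.normal(size=n)  # each agent's local copy of the max variable
    step = 0.05             # illustrative constant stepsize

    for _ in range(500):
        gx = a * x + b * y  # local gradient d f_i / d x
        gy = b * x - c * y  # local gradient d f_i / d y
        # Gossip-average with neighbors, then take a local gradient step:
        # descend in the min variable x, ascend in the max variable y.
        x = W @ x - step * gx
        y = W @ y + step * gy

    # The global saddle point of (1/n) * sum_i f_i is (0, 0) here, so all
    # local copies should have reached consensus near zero.
    print("consensus x:", x)
    print("consensus y:", y)

A usage note: replacing the plain local gradients gx, gy with gradient-tracking estimates, which additionally gossip the gradient differences between consecutive iterates, is what lets such methods tolerate data heterogeneity across agents.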