Federated Minimax Optimization
Federated minimax optimization addresses the challenge of training models with adversarial or constrained objectives across decentralized datasets, preserving data privacy while contending with issues such as data heterogeneity and fairness. Current research focuses on developing communication-efficient algorithms, such as those employing gradient tracking or smoothing techniques, and on analyzing their convergence rates in various nonconvex settings, including nonconvex-strongly-concave and nonconvex-nonconcave regimes. These advances are crucial for improving the scalability and robustness of federated learning in applications such as GAN training and fair classification, where minimax formulations are prevalent. The ultimate goal is efficient, provably convergent algorithms for these complex distributed optimization problems.
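As a concrete illustration of the algorithmic template described above, the sketch below implements Local SGDA, a common federated minimax baseline in which each client runs local stochastic gradient descent-ascent steps and the server periodically averages the iterates. The toy client-reweighting objective (strongly concave in the dual variable y), the step sizes, and the helper names (`local_grads`, `local_sgda`) are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each client i holds a local least-squares loss; the dual
# variable y reweights clients (an agnostic/fair-FL style minimax).
# The -lam * y_i^2 / 2 term makes the objective strongly concave in y,
# so for simplicity y is left unconstrained. All constants are illustrative.
n_clients, dim, n_local = 4, 5, 20
A = [rng.normal(size=(n_local, dim)) for _ in range(n_clients)]
b = [rng.normal(size=n_local) for _ in range(n_clients)]
lam = 1.0  # strong-concavity parameter for y

def local_grads(i, x, y, batch=5):
    """Minibatch gradients of f_i(x, y) = y_i*||A_i x - b_i||^2/(2m) - lam*y_i^2/2."""
    idx = rng.integers(0, n_local, size=batch)
    r = A[i][idx] @ x - b[i][idx]                 # minibatch residuals
    gx = y[i] * A[i][idx].T @ r / batch           # gradient w.r.t. x (descend)
    gy = np.zeros(n_clients)
    gy[i] = 0.5 * np.mean(r ** 2) - lam * y[i]    # gradient w.r.t. y_i (ascend)
    return gx, gy

def local_sgda(rounds=100, local_steps=10, eta_x=0.05, eta_y=0.05):
    x, y = np.zeros(dim), np.ones(n_clients) / n_clients
    for _ in range(rounds):
        xs, ys = [], []
        for i in range(n_clients):
            xi, yi = x.copy(), y.copy()
            for _ in range(local_steps):          # local descent-ascent steps
                gx, gy = local_grads(i, xi, yi)
                xi -= eta_x * gx                  # descend on the primal x
                yi += eta_y * gy                  # ascend on the dual y
            xs.append(xi)
            ys.append(yi)
        x = np.mean(xs, axis=0)                   # server averages primal iterates
        y = np.mean(ys, axis=0)                   # ... and dual iterates
    return x, y

x_star, y_star = local_sgda()
print("x:", np.round(x_star, 3))
print("y:", np.round(y_star, 3))
```

Periodic averaging keeps communication to one exchange per round rather than per step; the gradient-tracking and smoothing variants mentioned above modify the local updates to control client drift under data heterogeneity, but follow the same communication pattern.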