Compositional Minimax

Compositional minimax optimization addresses the problem of finding saddle points of objectives with a nested structure, i.e., problems of the form min_x max_y f(g(x), y), where the inner mapping g is itself an expectation that must be estimated from samples. Such problems arise in machine learning applications such as domain adaptation and AUC maximization. Current research focuses on efficient stochastic algorithms, notably gradient descent ascent methods augmented with momentum and variance-reduction techniques, for solving these often nonconvex-nonconcave problems, particularly in decentralized and federated learning settings. These advances aim to improve the scalability and convergence guarantees of training complex models while handling the challenges posed by imbalanced data and compositional structure, ultimately yielding more robust and efficient machine learning systems.

Papers