Primal-Dual
Primal-dual methods are optimization techniques that solve constrained problems by iteratively updating both primal and dual variables, aiming to find a saddle point of the associated Lagrangian at which the objective is optimized and the constraints are satisfied. Current research focuses on improving the efficiency and robustness of these methods, particularly through adaptive algorithms that eliminate the need for line searches and handle non-Euclidean norms, as well as extensions to federated learning and constrained reinforcement learning settings. These advances are significant for tackling large-scale optimization problems in diverse fields, including machine learning, control systems, and network optimization, leading to improved algorithm performance and broader applicability.
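To make the saddle-point idea concrete, the following is a minimal sketch of a basic primal-dual gradient descent-ascent iteration (Arrow-Hurwicz style) on the Lagrangian of a simple linearly constrained quadratic problem; the problem data, step sizes, and iteration count are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch: primal-dual gradient descent-ascent on the saddle-point problem
#   min_x max_{lam >= 0}  L(x, lam) = 0.5 * ||x - c||^2 + lam^T (A x - b),
# i.e. minimizing 0.5 * ||x - c||^2 subject to A x <= b.
# All problem data (A, b, c) and step sizes (tau, sigma) are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # constraint matrix (illustrative data)
b = rng.standard_normal(3)        # constraint right-hand side
c = rng.standard_normal(5)        # target point of the quadratic objective

x = np.zeros(5)                   # primal variable
lam = np.zeros(3)                 # dual variable (Lagrange multipliers)
tau, sigma = 0.05, 0.05           # primal and dual step sizes

for _ in range(2000):
    # Primal step: gradient descent on L(x, lam) with respect to x.
    grad_x = (x - c) + A.T @ lam
    x = x - tau * grad_x
    # Dual step: projected gradient ascent on L(x, lam) with respect to lam,
    # projecting onto lam >= 0 to respect the inequality constraints.
    lam = np.maximum(0.0, lam + sigma * (A @ x - b))

print("max constraint violation:", np.maximum(A @ x - b, 0.0).max())
```

More sophisticated variants, such as those studied in the papers below, replace the fixed step sizes with adaptive or coordinate-wise rules and add extrapolation or proximal terms, but the alternating primal descent / dual ascent structure is the same.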
Papers
On the Complexity of a Practical Primal-Dual Coordinate Method
Ahmet Alacaoglu, Volkan Cevher, Stephen J. Wright
Multiblock ADMM for nonsmooth nonconvex optimization with nonlinear coupling constraints
Le Thi Khanh Hien, Dimitri Papadimitriou
Lifted Primal-Dual Method for Bilinearly Coupled Smooth Minimax Optimization
Kiran Koshy Thekumparampil, Niao He, Sewoong Oh