Primal-Dual
Primal-dual methods are optimization techniques that solve constrained problems by iteratively updating both primal and dual variables, aiming to find a saddle point of the associated Lagrangian, at which the objective is optimized and the constraints are satisfied. Current research focuses on improving the efficiency and robustness of these methods, particularly through adaptive algorithms that eliminate the need for line searches and handle non-Euclidean norms, as well as extensions to federated learning and constrained reinforcement learning settings. These advances are significant for tackling large-scale optimization problems in diverse fields, including machine learning, control systems, and network optimization, leading to improved algorithm performance and broader applicability.
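The update pattern described above can be illustrated with a minimal sketch: gradient descent on the primal variable and projected gradient ascent on the dual variable of a Lagrangian. The toy problem and step size below are illustrative assumptions, not taken from any of the listed papers.

```python
# Primal-dual gradient method (descent-ascent) for
#     min f(x)  subject to  g(x) <= 0,
# on the toy problem f(x) = (x - 2)^2, g(x) = x - 1.
# Lagrangian: L(x, lam) = (x - 2)^2 + lam * (x - 1).
# The saddle point is x* = 1, lam* = 2.

def primal_dual(eta=0.05, iters=5000):
    x, lam = 0.0, 0.0
    for _ in range(iters):
        grad_x = 2.0 * (x - 2.0) + lam          # dL/dx: descend in the primal
        x = x - eta * grad_x
        lam = max(0.0, lam + eta * (x - 1.0))   # dL/dlam: ascend, project onto lam >= 0
    return x, lam

x_star, lam_star = primal_dual()
print(x_star, lam_star)  # approaches x* = 1, lam* = 2
```

The dual update is projected onto the nonnegative orthant because inequality-constraint multipliers must stay nonnegative; for small enough step sizes, the iterates converge to the saddle point of this strongly convex problem.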
Papers
A Theoretical Study of The Effects of Adversarial Attacks on Sparse Regression
Deepak Maurya, Jean Honorio
Efficient First-order Methods for Convex Optimization with Strongly Convex Function Constraints
Zhenwei Lin, Qi Deng
BTS: Bifold Teacher-Student in Semi-Supervised Learning for Indoor Two-Room Presence Detection Under Time-Varying CSI
Li-Hsiang Shen, Kai-Jui Chen, An-Hung Hsiao, Kai-Ten Feng