Online Convex Optimization
Online convex optimization (OCO) studies sequential decision-making in which a learner repeatedly picks a point from a convex set and then incurs the loss of a convex function that an adversary may choose adaptively; the goal is to minimize regret, the gap between the learner's cumulative loss and that of the best fixed decision in hindsight. Current research emphasizes efficient algorithms, such as variants of online gradient descent and Frank-Wolfe methods, that handle diverse settings including sparsity, outliers, delayed feedback, and constraints (both static and time-varying), often drawing on techniques from control theory and compressive sensing. These advances matter for applications such as resource allocation in distributed systems, robust machine learning, and adaptive control of dynamic systems, where they provide theoretically grounded and computationally efficient solutions to sequential decision-making under uncertainty.
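To make the setting concrete, here is a minimal sketch of projected online gradient descent, one of the gradient-descent variants mentioned above. The particular choices (a Euclidean-ball feasible set, linear losses, and the classic D/(G√t) step size) are illustrative assumptions for this sketch, not taken from the listed papers.

```python
# Minimal sketch of projected online gradient descent (OGD) for OCO.
# Assumptions for illustration: feasible set is a Euclidean ball, losses are
# linear, and the step size follows eta_t = D / (G * sqrt(t)), which yields
# O(sqrt(T)) regret for convex Lipschitz losses.
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ogd(loss_grad, dim, radius=1.0, lipschitz=1.0, horizon=100):
    """Run projected OGD and return the sequence of iterates x_1, ..., x_T."""
    diameter = 2.0 * radius
    x = np.zeros(dim)
    iterates = []
    for t in range(1, horizon + 1):
        iterates.append(x.copy())
        g = loss_grad(t, x)                        # gradient of f_t at x_t
        eta = diameter / (lipschitz * np.sqrt(t))  # decaying step size
        x = project_ball(x - eta * g, radius)      # gradient step + projection
    return iterates

# Example adversary: linear losses f_t(x) = <c_t, x> with random sign vectors.
rng = np.random.default_rng(0)
T, d = 100, 5
costs = rng.choice([-1.0, 1.0], size=(T, d)) / np.sqrt(d)  # ||c_t|| = 1
iterates = ogd(lambda t, x: costs[t - 1], dim=d, horizon=T)

# Regret against the best fixed point in the ball (in hindsight); for linear
# losses that point lies on the boundary, in the direction of -sum_t c_t.
cum_loss = sum(costs[t] @ iterates[t] for t in range(T))
s = costs.sum(axis=0)
best_fixed = -s / np.linalg.norm(s)
best_loss = s @ best_fixed
print("regret:", cum_loss - best_loss)
```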
Papers
Distributed Online Convex Optimization with Adversarial Constraints: Reduced Cumulative Constraint Violation Bounds under Slater's Condition
Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Yiguang Hong, Tianyou Chai, Karl H. Johansson
Mechanic: A Learning Rate Tuner
Ashok Cutkosky, Aaron Defazio, Harsh Mehta