Stochastic Convex Optimization
Stochastic convex optimization studies how to efficiently minimize a convex function that can only be accessed through noisy samples (for example, stochastic gradients), a setting that underlies much of machine learning. Current research emphasizes algorithms with optimal convergence rates under various constraints, including differential privacy requirements, heavy-tailed data, and limited computational resources; adaptive gradient methods and variance reduction techniques are prominent approaches. These advances improve the scalability and reliability of machine-learning models while addressing privacy concerns and real-world data complexities.
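To make the setting concrete, here is a minimal sketch of projected stochastic gradient descent with iterate averaging on a synthetic convex objective accessible only through noisy gradients. This is an illustration of the general framework, not a method from any of the papers listed below; the objective, the names `averaged_sgd` and `noisy_gradient`, and the constraint radius are all assumptions chosen for the example.

```python
import numpy as np

# Minimal sketch (illustrative, not from the listed papers): projected SGD
# with iterate averaging on f(x) = E[(a^T x - b)^2], where only noisy
# gradient samples are available. With a 1/sqrt(t) step size, the averaged
# iterate attains the classic O(1/sqrt(T)) rate for convex Lipschitz
# objectives over a bounded domain.

rng = np.random.default_rng(0)
d = 10
x_star = rng.normal(size=d)  # ground-truth minimizer (for evaluation only)

def noisy_gradient(x):
    """One stochastic gradient of the least-squares objective at x."""
    a = rng.normal(size=d)                # random data point
    b = a @ x_star + 0.1 * rng.normal()   # noisy label
    return 2 * (a @ x - b) * a

def project(x, radius=10.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def averaged_sgd(T=5000, eta0=0.1):
    x = np.zeros(d)
    x_avg = np.zeros(d)
    for t in range(1, T + 1):
        x = project(x - eta0 / np.sqrt(t) * noisy_gradient(x))
        x_avg += (x - x_avg) / t          # running average of iterates
    return x_avg

x_hat = averaged_sgd()
print("distance to minimizer:", np.linalg.norm(x_hat - x_star))
```

Averaging is what yields the optimal rate here for general convex Lipschitz functions; the last iterate of SGD can converge more slowly, which is one of the phenomena examined in the literature below.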
Papers
Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond
Matan Schliserman, Tomer Koren
Benign Underfitting of Stochastic Gradient Descent
Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman
Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
Idan Amir, Roi Livni, Nathan Srebro