Stochastic Convex Optimization

Stochastic convex optimization studies how to efficiently minimize a convex function that can only be accessed through noisy samples of its values or gradients, a setting that arises throughout machine learning. Current research emphasizes developing algorithms with optimal convergence rates under various constraints, including differential privacy requirements, heavy-tailed data, and limited computational resources; adaptive gradient methods and variance-reduction techniques are prominent approaches. These advances are crucial for improving the scalability and reliability of machine learning models while addressing privacy concerns and handling real-world data complexities.
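
To make the setting concrete, below is a minimal sketch of stochastic gradient descent on a convex objective, using a synthetic least-squares problem F(x) = E[(aᵀx − b)²] where each step sees only one noisy sample (a, b). The problem, step-size schedule, and all names here are illustrative assumptions, not taken from any particular paper.

```python
# Minimal sketch: SGD with a decaying step size and iterate averaging
# on a synthetic stochastic convex (least-squares) objective.
# Everything below is illustrative; hyperparameters are not tuned.
import numpy as np

rng = np.random.default_rng(0)
d = 10
x_star = rng.normal(size=d)              # ground-truth minimizer, for reference only

def sample_gradient(x):
    """Return an unbiased stochastic gradient of F at x from one noisy sample."""
    a = rng.normal(size=d)
    b = a @ x_star + 0.1 * rng.normal()  # noisy observation of a @ x_star
    return 2.0 * (a @ x - b) * a         # gradient of (a^T x - b)^2 at x

T = 20_000
x = np.zeros(d)
x_avg = np.zeros(d)
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)               # decaying step size, standard for convex SGD
    x -= eta * sample_gradient(x)
    x_avg += (x - x_avg) / t             # running average of iterates

print("distance of averaged iterate to minimizer:",
      np.linalg.norm(x_avg - x_star))
```

Averaging the iterates, as above, is one standard way to obtain the classical O(1/√T) rate in this setting; adaptive methods and variance reduction modify the step size or the gradient estimate, respectively, to improve on it under additional structure.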

Papers