Smooth Stochastic Convex Optimization
Smooth stochastic convex optimization studies how to efficiently minimize smooth convex functions given only noisy gradient information, with the twin goals of optimal convergence rates and low computational cost. Current research emphasizes parameter-free algorithms that adapt to unknown problem characteristics (such as the smoothness constant or noise level), analyses of how step size and iteration count affect generalization, and high-probability convergence guarantees under various noise models, including heavy-tailed distributions. These advances are crucial for improving the scalability and reliability of machine learning algorithms, particularly in large-scale applications where noisy data and limited prior knowledge are the norm.
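As a concrete illustration of the ideas above, here is a minimal Python sketch of clipped stochastic gradient descent with iterate averaging, a standard device behind high-probability guarantees under heavy-tailed gradient noise. It is not drawn from any specific paper in this collection: the function names (`clipped_sgd`, `grad_oracle`), the step size 1/L, the clip level, and the Student-t noise model are all illustrative assumptions.

```python
import numpy as np

def clipped_sgd(grad_oracle, x0, lr, clip_level, n_steps, rng):
    """Hypothetical sketch of clipped SGD with averaging.

    Clipping caps the norm of each stochastic gradient, which controls
    rare large-noise events and underlies high-probability convergence
    bounds under heavy-tailed noise.
    """
    x = x0.copy()
    avg = x0.copy()
    for t in range(1, n_steps + 1):
        g = grad_oracle(x, rng)
        norm = np.linalg.norm(g)
        if norm > clip_level:                 # rescale heavy-tailed gradients
            g = g * (clip_level / norm)
        x = x - lr * g
        avg += (x - avg) / (t + 1)            # running average of the iterates
    return avg

# Toy smooth convex objective: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2                 # smoothness constant of f

def grad_oracle(x, rng):
    # Exact gradient plus heavy-tailed (Student-t, df=2) noise,
    # whose variance is infinite -- a stand-in for heavy-tailed settings.
    return A.T @ (A @ x - b) + rng.standard_t(df=2.0, size=x.shape)

x_hat = clipped_sgd(grad_oracle, np.zeros(5), lr=1.0 / L,
                    clip_level=10.0, n_steps=5000, rng=rng)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to optimum:", np.linalg.norm(x_hat - x_star))
```

The step size 1/L is the classical choice for L-smooth objectives; parameter-free methods of the kind mentioned above aim to remove the need to know L or the noise level in advance.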