Stochastic Optimization
Stochastic optimization focuses on finding optimal solutions for problems involving uncertainty, aiming to minimize expected cost or maximize expected reward. Current research emphasizes efficient algorithms, such as variants of stochastic gradient descent (SGD), that handle challenges like asynchronous parallel computation, heavy-tailed noise, and biased oracles, often incorporating techniques such as variance reduction and adaptive learning rates. These advances are crucial for improving the scalability and robustness of machine learning models and optimization methods across fields including deep learning, reinforcement learning, and operations research.
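To make the core idea concrete, here is a minimal sketch of plain SGD on a toy stochastic objective. All names (`sgd`, `grad_sample`) and the toy problem are illustrative assumptions, not taken from any of the papers below: each step follows the gradient of a single random sample, which is an unbiased estimate of the gradient of the expected cost.

```python
import random

def sgd(grad_sample, x0, lr=0.05, steps=2000, seed=0):
    """Plain SGD: repeatedly step along a noisy, unbiased gradient estimate.

    grad_sample(x, rng) should return an unbiased estimate of the gradient
    of the expected objective at x (a hypothetical interface for this sketch).
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x -= lr * grad_sample(x, rng)
    return x

# Toy problem: f(x, xi) = (x - xi)^2 with xi ~ N(mu, 1).
# The expected cost E[f(x, xi)] is minimized at x = mu.
mu = 3.0
grad = lambda x, rng: 2.0 * (x - rng.gauss(mu, 1.0))

x_star = sgd(grad, x0=0.0)
# With a small constant step size, x_star fluctuates around mu;
# decaying the learning rate or averaging iterates would tighten this.
```

Variance-reduction methods (e.g., SVRG-style control variates) and adaptive learning rates mentioned above refine exactly this loop: they replace `grad_sample` with a lower-variance estimator or make `lr` depend on the observed gradients.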