Stochastic Optimization
Stochastic optimization focuses on finding optimal solutions for problems involving uncertainty, aiming to minimize expected costs or maximize expected rewards. Current research emphasizes developing efficient algorithms, such as variants of stochastic gradient descent (SGD), that handle diverse challenges like asynchronous parallel computation, heavy-tailed noise, and biased oracles, often incorporating techniques like variance reduction and adaptive learning rates. These advancements are crucial for improving the scalability and robustness of machine learning models and optimization methods across various fields, including deep learning, reinforcement learning, and operations research.
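The core idea behind SGD-type methods is to update a decision variable with noisy but unbiased gradient estimates, using a decaying step size so the noise averages out. As a minimal illustrative sketch (the objective, step-size schedule, and helper names here are assumptions, not drawn from any particular paper):

```python
import random

def sgd(grad_sample, x0, steps, lr0=0.5):
    """Plain SGD with a 1/t decaying step size (Robbins-Monro schedule)."""
    x = x0
    for t in range(1, steps + 1):
        g = grad_sample(x)      # noisy, unbiased gradient estimate
        x -= (lr0 / t) * g      # decaying step size tames the noise
    return x

# Toy objective: f(x) = E[(x - A)^2] with A ~ Uniform(0, 2),
# so the minimizer is E[A] = 1. Each sample gives gradient 2*(x - a).
random.seed(0)
grad = lambda x: 2.0 * (x - random.uniform(0.0, 2.0))
x_star = sgd(grad, x0=5.0, steps=20000)  # converges close to 1.0
```

Variance-reduction and adaptive-learning-rate methods refine exactly this loop: the former replaces `g` with a lower-variance estimator, the latter replaces the fixed `lr0 / t` schedule with per-coordinate, data-driven step sizes.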
Papers