Stochastic Optimization
Stochastic optimization focuses on finding optimal solutions for problems involving uncertainty, aiming to minimize expected costs or maximize expected rewards. Current research emphasizes developing efficient algorithms, such as variants of stochastic gradient descent (SGD), that handle diverse challenges like asynchronous parallel computation, heavy-tailed noise, and biased oracles, often incorporating techniques like variance reduction and adaptive learning rates. These advancements are crucial for improving the scalability and robustness of machine learning models and optimization methods across various fields, including deep learning, reinforcement learning, and operations research.
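To make the idea concrete, here is a minimal sketch (not taken from any paper above) of stochastic gradient descent with an AdaGrad-style adaptive learning rate, applied to a toy expected-loss objective. The function names, the toy objective, and all parameter values are illustrative assumptions, not a reference implementation.

```python
import random

def sgd_adagrad(grad_fn, x0, lr=0.5, steps=2000, eps=1e-8, seed=0):
    """Minimize an expected loss with SGD plus AdaGrad-style step scaling.

    grad_fn(x, rng) must return a stochastic (unbiased) gradient estimate
    of the expected loss at x, using rng for its randomness.
    """
    rng = random.Random(seed)
    x = x0
    g2_sum = 0.0  # running sum of squared gradients (AdaGrad accumulator)
    for _ in range(steps):
        g = grad_fn(x, rng)
        g2_sum += g * g
        # Effective step size shrinks as gradient energy accumulates,
        # which damps the noise in later iterations.
        x -= lr * g / ((g2_sum ** 0.5) + eps)
    return x

# Toy problem: minimize E[(x - Z)^2] with Z ~ Uniform(1, 3).
# The minimizer is x* = E[Z] = 2; each sample z yields the
# stochastic gradient 2 * (x - z).
def noisy_grad(x, rng):
    z = rng.uniform(1.0, 3.0)
    return 2.0 * (x - z)

x_star = sgd_adagrad(noisy_grad, x0=10.0)
```

Because only a single random sample is drawn per step, each gradient is cheap but noisy; the decaying adaptive step size is what lets the iterates settle near the true minimizer despite that noise.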