Stochastic Optimization
Stochastic optimization seeks optimal solutions to problems involving uncertainty, typically by minimizing an expected cost or maximizing an expected reward. Current research emphasizes efficient algorithms, notably variants of stochastic gradient descent (SGD), that cope with challenges such as asynchronous parallel computation, heavy-tailed noise, and biased gradient oracles, often by incorporating variance reduction and adaptive learning rates. These advances are central to the scalability and robustness of machine learning and optimization methods across fields including deep learning, reinforcement learning, and operations research.
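The variance-reduction idea mentioned above can be made concrete with SVRG (stochastic variance-reduced gradient, Johnson & Zhang, 2013): an occasional full gradient computed at a snapshot point corrects each cheap per-sample gradient, so the update remains unbiased while its variance shrinks as the iterate approaches the optimum. Below is a minimal sketch on a least-squares toy problem; the problem instance, function names, step size, and epoch counts are illustrative choices, not taken from any particular paper.

```python
import numpy as np

# Minimal SVRG sketch on least squares: minimize F(w) = (1/n) * sum_i 0.5*(a_i @ w - b_i)^2.
# The data, step size, and epoch counts below are illustrative, not from a specific paper.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.01 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th component function at w."""
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    """Full-batch gradient, recomputed once per outer epoch."""
    return A.T @ (A @ w - b) / n

def svrg(w0, step=0.01, epochs=30, inner=2 * n):
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()        # snapshot point
        mu = full_grad(w_snap)   # full gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: still unbiased for
            # full_grad(w), with variance vanishing as w -> optimum.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= step * g
    return w

w_hat = svrg(np.zeros(d))
print("distance to w_true:", np.linalg.norm(w_hat - w_true))  # ~ noise level
```

The key point is that the corrected gradient `grad_i(w, i) - grad_i(w_snap, i) + mu` has expectation equal to the full gradient at `w`, which is what permits a constant step size and, on strongly convex problems, linear convergence where plain SGD would need a decaying step size.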