Stochastic Optimization
Stochastic optimization focuses on finding optimal solutions for problems involving uncertainty, aiming to minimize expected costs or maximize expected rewards. Current research emphasizes developing efficient algorithms, such as variants of stochastic gradient descent (SGD), that handle diverse challenges like asynchronous parallel computation, heavy-tailed noise, and biased oracles, often incorporating techniques like variance reduction and adaptive learning rates. These advancements are crucial for improving the scalability and robustness of machine learning models and optimization methods across various fields, including deep learning, reinforcement learning, and operations research.
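To make the core idea concrete, here is a minimal sketch (not from the source, just an illustration) of stochastic gradient descent minimizing an expected loss F(w) = E[(w − X)²] where X is drawn from a Gaussian; the minimizer is the mean of X. The 1/t decaying learning rate is a standard choice for strongly convex stochastic objectives; the function names and constants are illustrative assumptions.

```python
import random

def sgd(grad_sample, w0, lr0=0.5, steps=5000):
    # Stochastic gradient descent: at each step, take a noisy
    # gradient sample and move against it with a 1/t decaying rate.
    w = w0
    for t in range(1, steps + 1):
        lr = lr0 / t
        w -= lr * grad_sample(w)
    return w

random.seed(0)
mu, sigma = 3.0, 1.0
# Objective: F(w) = E[(w - X)^2], X ~ N(mu, sigma^2); minimizer w* = mu.
grad = lambda w: 2.0 * (w - random.gauss(mu, sigma))
w_star = sgd(grad, w0=0.0)
print(round(w_star, 1))  # converges near mu = 3.0
```

With this particular step size and gradient, each update reduces to a running average of the samples, which illustrates why SGD converges to the expectation despite never seeing the true gradient; variance-reduction and adaptive-rate methods mentioned above refine exactly this noisy-update loop.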