Stochastic Optimization
Stochastic optimization focuses on finding optimal solutions for problems involving uncertainty, aiming to minimize expected costs or maximize expected rewards. Current research emphasizes developing efficient algorithms, such as variants of stochastic gradient descent (SGD), that handle diverse challenges like asynchronous parallel computation, heavy-tailed noise, and biased oracles, often incorporating techniques like variance reduction and adaptive learning rates. These advancements are crucial for improving the scalability and robustness of machine learning models and optimization methods across various fields, including deep learning, reinforcement learning, and operations research.
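To make the core idea concrete, here is a minimal sketch of plain SGD on a toy stochastic objective. The problem, function names, and hyperparameters (`grad_sample`, `lr0`, the decaying step size) are illustrative choices, not taken from any specific paper; the point is only that noisy gradient samples with a diminishing step size converge to the minimizer of the *expected* loss.

```python
import random

def sgd(grad_sample, x0, steps=5000, lr0=0.5):
    """Stochastic gradient descent with a lr0 / sqrt(t) decaying step size.

    grad_sample(x) returns an unbiased noisy sample of the gradient of the
    expected loss at x; the iterate drifts toward the true minimizer.
    """
    x = x0
    for t in range(1, steps + 1):
        g = grad_sample(x)
        x -= (lr0 / t ** 0.5) * g
    return x

# Toy problem: minimize E[(x - z)^2] with z = 3 + Gaussian noise,
# so the minimizer of the expected loss is x* = E[z] = 3.
random.seed(0)
grad = lambda x: 2.0 * (x - (3.0 + random.gauss(0.0, 1.0)))
x_star = sgd(grad, x0=0.0)
```

Techniques mentioned above refine this basic loop: variance reduction (e.g., averaging or control variates) shrinks the noise in `g`, while adaptive learning rates replace the fixed `lr0 / sqrt(t)` schedule with per-coordinate step sizes.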