Stochastic Optimization
Stochastic optimization focuses on finding optimal solutions for problems involving uncertainty, aiming to minimize expected costs or maximize expected rewards. Current research emphasizes developing efficient algorithms, such as variants of stochastic gradient descent (SGD), that handle diverse challenges like asynchronous parallel computation, heavy-tailed noise, and biased oracles, often incorporating techniques like variance reduction and adaptive learning rates. These advancements are crucial for improving the scalability and robustness of machine learning models and optimization methods across various fields, including deep learning, reinforcement learning, and operations research.
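To make the core idea concrete, here is a minimal sketch of vanilla stochastic gradient descent with a decaying learning rate, minimizing the expected quadratic loss f(w) = E_z[(w − z)² / 2] over a small sample set; all names (`sgd`, `samples`, `lr0`) are illustrative, not from any specific paper in this collection.

```python
import random

def sgd(samples, w0=0.0, lr0=0.5, epochs=50, seed=0):
    """Minimize f(w) = E_z[(w - z)^2 / 2] with plain SGD.

    The stochastic gradient at a sampled z is (w - z); with a
    decaying step size, the iterate converges to the sample mean.
    This is a toy sketch, not an implementation from the papers above.
    """
    rng = random.Random(seed)
    w, t = w0, 0
    for _ in range(epochs):
        order = samples[:]
        rng.shuffle(order)          # one random pass over the data per epoch
        for z in order:
            t += 1
            lr = lr0 / (1 + 0.1 * t)  # decaying learning rate schedule
            w -= lr * (w - z)         # stochastic gradient step
    return w

data = [0.8, 1.2, 0.9, 1.1, 1.0]
w_star = sgd(data)  # converges near the mean of `data` (1.0)
```

Variance-reduction methods (e.g., SVRG-style control variates) and adaptive schemes (e.g., Adam-style per-coordinate step sizes) modify the gradient estimate or the learning rate in this inner loop to reduce noise or adapt to curvature.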
129 papers