Online Stochastic Optimization
Online stochastic optimization focuses on developing algorithms that efficiently learn good decisions in dynamic environments where data arrives sequentially and is inherently uncertain. Current research emphasizes gradient-based methods, including online gradient descent and its variants, often adapted to exploit specific problem structures such as quasar-convexity or to incorporate prior knowledge (e.g., approximate system dynamics). These advances are crucial in fields such as robotics, energy management, and online advertising, where real-time decision-making under uncertainty is paramount. The development of robust, efficient algorithms with provable regret bounds remains a central theme.
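To make the core idea concrete, below is a minimal sketch of projected online gradient descent, the prototypical algorithm in this setting. It assumes a Euclidean-ball feasible set and the standard step size eta_t = D / (G * sqrt(t)), which gives O(sqrt(T)) regret for convex losses; all names, constants, and the toy loss sequence are illustrative assumptions, not taken from any particular paper listed here.

```python
# Sketch of projected online gradient descent (OGD) under the assumptions
# stated above; the helper names and the toy losses are hypothetical.
import numpy as np

def project_to_ball(x, radius):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0, lipschitz=1.0):
    """Run OGD for T rounds; grad_fns[t](x) returns the round-t loss gradient at x.

    Step size eta_t = D / (G * sqrt(t)) with D = 2 * radius (set diameter)
    and G = lipschitz (assumed gradient bound), the classical choice that
    yields O(sqrt(T)) regret for convex losses.
    """
    x = np.zeros(dim)
    iterates = []
    diameter = 2.0 * radius
    for t, grad_fn in enumerate(grad_fns, start=1):
        iterates.append(x.copy())
        eta = diameter / (lipschitz * np.sqrt(t))
        x = project_to_ball(x - eta * grad_fn(x), radius)
    return iterates

# Toy usage: stochastic quadratic losses f_t(x) = 0.5 * ||x - z_t||^2,
# with targets z_t drawn i.i.d. from a fixed distribution.
rng = np.random.default_rng(0)
targets = [rng.normal(0.1, 0.05, size=3) for _ in range(200)]
grads = [lambda x, z=z: x - z for z in targets]
iters = online_gradient_descent(grads, dim=3, radius=1.0, lipschitz=2.0)
print("final iterate:", np.round(iters[-1], 3))
```

The variants mentioned above (e.g., for quasar-convex losses or with approximate dynamics as a prior) typically modify the update direction or step-size schedule in this same loop while preserving the projected-gradient skeleton.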