Finite-Time Convergence
Finite-time convergence analysis develops algorithms, and accompanying error bounds, that quantify how close an iterate is to a solution after a fixed number of steps, rather than relying on asymptotic guarantees that hold only in the limit. Current research pursues such guarantees in several settings, including reinforcement learning (actor-critic methods and temporal difference learning), multi-agent systems, and optimization (stochastic gradient descent variants and model predictive control). These guarantees matter because they make algorithm behavior predictable at deployment scale, improving reliability and efficiency across fields ranging from robotics and control systems to machine learning and distributed optimization.
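As a concrete illustration of the kind of quantity finite-time analyses bound, the sketch below (not taken from the papers listed here; the Markov reward process and step-size schedule are assumptions for illustration) runs tabular TD(0) along a single Markovian trajectory and tracks the error ‖θ_t − θ*‖ after a fixed number of updates, rather than in the limit:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small (assumed) Markov reward process for policy evaluation.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # state-transition probabilities
r = np.array([1.0, -1.0])    # per-state reward
gamma = 0.9                  # discount factor

# The TD(0) fixed point has a closed form: theta* = (I - gamma P)^{-1} r.
theta_star = np.linalg.solve(np.eye(2) - gamma * P, r)

def td0(T, alpha0=0.5):
    """Run T TD(0) updates under Markovian sampling; return the final
    iterate and the error ||theta_t - theta*|| at every step."""
    theta = np.zeros(2)
    s = 0
    errors = []
    for t in range(T):
        s_next = rng.choice(2, p=P[s])
        delta = r[s] + gamma * theta[s_next] - theta[s]   # TD error
        theta[s] += alpha0 / (1 + t / 100) * delta        # decaying step size
        errors.append(np.linalg.norm(theta - theta_star))
        s = s_next
    return theta, errors

theta_T, errors = td0(T=20000)
print(f"error after T steps: {errors[-1]:.4f} (initial {errors[0]:.4f})")
```

A finite-time result for this setting would bound `errors[-1]` explicitly as a function of T, the step-size schedule, and the mixing properties of the chain; the simulation only shows the empirical decay such bounds formalize.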
Papers
Finite-Time Error Analysis of Online Model-Based Q-Learning with a Relaxed Sampling Model
Han-Dong Lim, HyeAnn Lee, Donghwan Lee
Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling
Arman Adibi, Nicolo Dal Fabbro, Luca Schenato, Sanjeev Kulkarni, H. Vincent Poor, George J. Pappas, Hamed Hassani, Aritra Mitra