Discrete Time
Discrete-time modeling represents the evolution of continuous processes as sequences of states sampled at fixed time intervals, enabling the analysis and control of dynamical systems with discrete mathematical tools. Current research emphasizes robust, efficient algorithms for reinforcement learning, optimal control, and system identification, often combining neural networks and stochastic approximation with techniques such as Langevin dynamics and Girsanov-based methods. The discrete-time view is essential in practice when continuous-time models are computationally intractable or when data are inherently sampled at discrete instants, and it yields improved model accuracy and control strategies in fields such as robotics, ecology, and finance.
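To make the basic idea concrete, the following is a minimal sketch of how a continuous-time stochastic process can be turned into a discrete-time system on a fixed time grid, using an Euler-Maruyama step. The drift, diffusion, step size, and initial state are illustrative assumptions and are not taken from the papers listed below.

```python
import numpy as np

def euler_maruyama_step(x, f, g, dt, rng):
    """One discrete-time update x_{k+1} = x_k + f(x_k)*dt + g(x_k)*sqrt(dt)*w_k,
    with w_k ~ N(0, 1). This replaces the continuous-time SDE
    dx = f(x) dt + g(x) dW with a discrete-time stochastic system
    evolving on a fixed grid of step size dt."""
    w = rng.standard_normal()
    return x + f(x) * dt + g(x) * np.sqrt(dt) * w

# Illustrative dynamics (assumed for this sketch): linear drift toward the
# origin with constant noise intensity.
f = lambda x: -1.0 * x
g = lambda x: 0.1

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 500
x = 1.0
trajectory = [x]
for _ in range(n_steps):
    x = euler_maruyama_step(x, f, g, dt, rng)
    trajectory.append(x)

print(f"state after {n_steps} discrete steps: {trajectory[-1]:.4f}")
```

The resulting sequence of states is exactly the kind of discrete-time stochastic system that learned controllers and certificates, such as those in the papers below, are designed to act on.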
Papers
Learning Control Policies for Stochastic Systems with Reach-avoid Guarantees
Đorđe Žikelić, Mathias Lechner, Thomas A. Henzinger, Krishnendu Chatterjee
Learning Provably Stabilizing Neural Controllers for Discrete-Time Stochastic Systems
Matin Ansaripour, Krishnendu Chatterjee, Thomas A. Henzinger, Mathias Lechner, Đorđe Žikelić