Lyapunov Certificate
Lyapunov certificates are functions, typically learned from data, whose existence rigorously certifies the stability and safety of complex dynamical systems, particularly those controlled by deep reinforcement learning (DRL) agents. Current research focuses on efficient methods for training these certificates, often parameterized as neural networks, and on integrating them into control design frameworks such as model predictive control. This work addresses the critical need for verifiable guarantees in safety-critical applications such as aerospace systems and robotics, where the opaque nature of DRL models poses significant challenges. The resulting advancements promise to enhance the reliability and trustworthiness of autonomous systems.
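To make the idea concrete, below is a minimal sketch of how a neural Lyapunov candidate might be trained for a known discrete-time system. The dynamics, network size, sampling box, and loss weights are illustrative assumptions, not any specific paper's method: the loss simply penalizes violations of the standard Lyapunov conditions (V(0) = 0, V(x) > 0 away from the origin, and V decreasing along trajectories).

```python
# Sketch: training a neural Lyapunov candidate V_theta for x_{k+1} = f(x_k).
# All concrete choices below (dynamics, architecture, margins) are assumptions
# made for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

def f(x):
    # Example dynamics: a stable 2-D linear system (assumed for illustration).
    A = torch.tensor([[0.9, 0.2], [-0.1, 0.8]])
    return x @ A.T

# Lyapunov candidate V_theta: R^2 -> R; squaring the output keeps V >= 0.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
def V(x):
    return net(x).pow(2).squeeze(-1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = 4.0 * (torch.rand(256, 2) - 0.5)   # sample states in a box around 0
    zero = torch.zeros(1, 2)
    # Lyapunov conditions as hinge penalties:
    #   (1) V(0) = 0
    #   (2) V(x) >= margin(x) > 0 away from the origin
    #   (3) V(f(x)) - V(x) <= -margin(x)  (decrease along trajectories)
    margin = 1e-3 * x.norm(dim=-1)
    loss = (
        V(zero).mean()
        + torch.relu(margin - V(x)).mean()
        + torch.relu(V(f(x)) - V(x) + margin).mean()
    )
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

A low empirical loss does not by itself constitute a certificate: approaches in this area typically follow training with a formal verification step (for example SMT, MILP, or interval bound propagation over the network) to prove the conditions hold over the entire region of interest, retraining on any counterexamples found.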