Safe Control
Safe control research focuses on designing algorithms and systems that guarantee the safety of autonomous agents, particularly in uncertain or dynamic environments. Current efforts concentrate on combining machine learning techniques, such as reinforcement learning and Gaussian processes, with formal methods like control barrier functions (CBFs) and model predictive control (MPC) to obtain provable safety guarantees. This work is crucial for deploying autonomous systems in safety-critical applications such as robotics, autonomous driving, and smart energy grids, where failures can have severe consequences. Developing safe control methods that are robust, computationally efficient, and verifiable remains the central challenge, bridging the gap between theoretical guarantees and real-world deployment.
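A common concrete instance of the CBF approach mentioned above is a safety filter: a nominal (possibly learned) controller is minimally adjusted, via a small quadratic program, so that a barrier condition is maintained at every step. The sketch below is a minimal illustration, not taken from any of the papers listed here; the single-integrator dynamics, the circular obstacle, and the gains are illustrative assumptions. For this one-constraint case the QP has a closed-form solution, namely projecting the nominal input onto the constraint half-space.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Minimally modify u_nom so the CBF condition h_dot >= -alpha * h holds.

    For single-integrator dynamics x_dot = u and the barrier
    h(x) = ||x - x_obs||^2 - r^2 (h >= 0 means outside the obstacle),
    the QP  min ||u - u_nom||^2  s.t.  grad_h(x) @ u >= -alpha * h(x)
    reduces to projecting u_nom onto the constraint half-space.
    """
    d = x - x_obs
    h = d @ d - r ** 2          # barrier value; the safe set is {x : h(x) >= 0}
    a = 2.0 * d                 # grad h(x); here h_dot = a @ u since x_dot = u
    b = -alpha * h              # CBF constraint: a @ u >= b
    if a @ u_nom >= b:
        return u_nom            # nominal input already satisfies the constraint
    # Project u_nom onto {u : a @ u >= b} (assumes x != x_obs, so a @ a > 0).
    return u_nom + ((b - a @ u_nom) / (a @ a)) * a

# Drive toward a goal that lies behind a circular obstacle.
x, goal = np.array([-2.0, 0.05]), np.array([2.0, 0.0])
x_obs, r, dt = np.array([0.0, 0.0]), 0.5, 0.01
min_dist = np.inf
for _ in range(2000):
    u_nom = 1.0 * (goal - x)                   # naive proportional controller
    u = cbf_safety_filter(x, u_nom, x_obs, r)  # safety-filtered input
    x = x + dt * u                             # Euler integration
    min_dist = min(min_dist, np.linalg.norm(x - x_obs))
# The continuous-time guarantee keeps h >= 0; Euler discretization can
# introduce small errors, so the closest approach should be near or above r.
print(f"closest approach: {min_dist:.3f} (obstacle radius {r})")
```

The same filter structure carries over when an RL policy or learned model supplies u_nom; with multiple constraints or input limits, the closed-form projection is replaced by a general-purpose QP solver.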
Papers
Myopically Verifiable Probabilistic Certificates for Safe Control and Learning
Zhuoyuan Wang, Haoming Jing, Christian Kurniawan, Albert Chern, Yorie Nakahira
Reinforcement Learning with Adaptive Regularization for Safe Control of Critical Systems
Haozhe Tian, Homayoun Hamedmoghadam, Robert Shorten, Pietro Ferraro