Probabilistic Safety
Probabilistic safety focuses on designing systems that remain safe with high probability, addressing the inherent uncertainty in complex systems such as autonomous vehicles and robots. Current research emphasizes reinforcement learning (RL) algorithms that balance reward maximization against safety constraints, often using control barrier functions, probabilistic model checking, or physics-informed learning to provide safety guarantees under uncertainty. This work is essential for deploying AI in safety-critical applications, enabling reliable and trustworthy autonomous systems across domains.
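Of the techniques named above, control barrier functions (CBFs) are the most mechanical to illustrate. The sketch below is a minimal CBF safety filter that projects a nominal (e.g., RL-policy) action onto a safe action set; it assumes scalar single-integrator dynamics x_dot = u with box state constraints, and the name `cbf_safety_filter` and all parameter values are illustrative assumptions, not drawn from the listed papers.

```python
import numpy as np

# Minimal control-barrier-function (CBF) safety filter for scalar
# single-integrator dynamics x_dot = u, keeping x within [x_min, x_max].
# Barriers: h1(x) = x_max - x and h2(x) = x - x_min. The CBF condition
# h_dot >= -alpha * h reduces to box constraints on u, so the usual
# CBF quadratic program has the closed-form solution np.clip below.

def cbf_safety_filter(x, u_nominal, x_min=-1.0, x_max=1.0, alpha=2.0):
    """Project the nominal (e.g., RL-policy) action onto the safe set."""
    u_lo = -alpha * (x - x_min)   # from h2: u >= -alpha * h2(x)
    u_hi = alpha * (x_max - x)    # from h1: u <= alpha * h1(x)
    return float(np.clip(u_nominal, u_lo, u_hi))

# Example: an aggressive nominal action that would drive the state past
# the boundary is filtered so the state stays inside [x_min, x_max].
x, dt = 0.9, 0.01
for _ in range(500):
    u_rl = 5.0                      # stand-in for a learned policy's action
    u = cbf_safety_filter(x, u_rl)  # minimally invasive safety correction
    x += u * dt                     # Euler step of x_dot = u
print(f"final state x = {x:.4f} (remains within the safe set)")
```

Because the dynamics and constraint here are scalar, the CBF quadratic program collapses to a closed-form clip; for multi-dimensional control-affine systems, one would instead solve a small QP at every control step.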
Papers
Safe Reinforcement Learning with Probabilistic Guarantees Satisfying Temporal Logic Specifications in Continuous Action Spaces
Hanna Krasowski, Prithvi Akella, Aaron D. Ames, Matthias Althoff
Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks
Linrui Zhang, Qin Zhang, Li Shen, Bo Yuan, Xueqian Wang, Dacheng Tao