Safety Guarantee
Safety guarantees in AI and robotics aim to ensure that systems operate reliably and avoid harmful actions, a critical concern for deploying autonomous systems in real-world settings. Current research focuses on developing and verifying safety mechanisms through diverse approaches, including control barrier functions, constrained reinforcement learning, and formal verification techniques such as reachability analysis, often incorporating neural networks or Gaussian processes for modeling and prediction. These advances are crucial for building trust and enabling wider adoption of AI-powered systems in safety-critical applications such as healthcare, autonomous driving, and robotics. A minimal sketch of one of these approaches, a control-barrier-function safety filter, is given below.
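The sketch below illustrates the control-barrier-function idea mentioned above: a nominal control command is minimally corrected so that a barrier condition stays satisfied. The single-integrator dynamics, circular obstacle, barrier h(x) = ||x - x_obs||^2 - r^2, and gain alpha are illustrative assumptions for this example only, not the method of either paper listed below.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, radius, alpha=1.0):
    """Project a nominal input onto the safe set defined by dh/dt + alpha*h >= 0.

    Assumes single-integrator dynamics x_dot = u (illustrative assumption).
    """
    h = np.dot(x - x_obs, x - x_obs) - radius**2   # barrier value (h >= 0 means safe)
    a = 2.0 * (x - x_obs)                          # gradient of h, so dh/dt = a @ u
    b = -alpha * h                                 # safety constraint: a @ u >= b
    if a @ u_nom >= b:
        return u_nom                               # nominal command is already safe
    # Minimal correction: closed-form projection of u_nom onto the half-space a @ u >= b.
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Usage: a goal-seeking nominal controller is only modified when it would
# violate the barrier condition near the obstacle.
x = np.array([0.0, 0.0])
goal = np.array([4.0, 0.0])
u_nominal = goal - x
u_safe = cbf_safety_filter(x, u_nominal, x_obs=np.array([2.0, 0.1]), radius=1.0)
print(u_safe)
```

The design choice here mirrors the general "safety filter" pattern: the task controller stays untouched whenever it is already safe, and the correction is the smallest one that restores the constraint, which keeps interference with the nominal behavior minimal.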
Papers
Provably Safe Reinforcement Learning via Action Projection using Reachability Analysis and Polynomial Zonotopes
Niklas Kochdumper, Hanna Krasowski, Xiao Wang, Stanley Bak, Matthias Althoff
Safe Planning in Dynamic Environments using Conformal Prediction
Lars Lindemann, Matthew Cleaveland, Gihyun Shim, George J. Pappas