Safety Guarantee

Safety guarantees in AI and robotics aim to ensure that systems operate reliably and avoid harmful actions, a critical concern when deploying autonomous systems in real-world settings. Current research focuses on developing and verifying safety mechanisms through diverse approaches, including control barrier functions, constrained reinforcement learning, and formal verification techniques such as reachability analysis, often using neural networks or Gaussian processes for modeling and prediction. These advances are crucial for building trust and enabling wider adoption of AI-powered systems in safety-critical applications such as healthcare, autonomous driving, and robotics.
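To make the control-barrier-function idea concrete, here is a minimal sketch of a CBF safety filter for a hypothetical 1-D integrator system (x_dot = u) with safe set {x : h(x) >= 0}, where h(x) = x_max - x. The limit `X_MAX`, gain `ALPHA`, and the aggressive nominal command are all illustrative assumptions, not from any particular paper:

```python
# Minimal control-barrier-function (CBF) safety filter sketch.
# Hypothetical 1-D system: x_dot = u, with safe set {x : h(x) >= 0}
# for h(x) = X_MAX - x (stay below a position limit).

X_MAX = 1.0   # assumed position limit (illustrative)
ALPHA = 2.0   # class-K gain; larger permits faster approach to the boundary

def h(x):
    """Barrier function: nonnegative inside the safe set."""
    return X_MAX - x

def safe_input(x, u_nom):
    """Return the input closest to u_nom satisfying the CBF condition
    h_dot(x, u) + ALPHA * h(x) >= 0.
    For x_dot = u we have h_dot = -u, so the constraint reduces to
    u <= ALPHA * h(x), and the filtered input has a closed form."""
    return min(u_nom, ALPHA * h(x))

# Simulate an aggressive nominal controller pushing toward the limit.
x, dt = 0.0, 0.01
for _ in range(1000):
    u = safe_input(x, u_nom=5.0)  # unfiltered command would overshoot X_MAX
    x += u * dt                   # forward-Euler integration step

assert h(x) >= 0  # the state never leaves the safe set
```

In higher dimensions the same condition is typically enforced by solving a small quadratic program at each control step rather than the closed-form `min` used in this 1-D sketch.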

Papers