Safety Guarantee
Safety guarantees in AI and robotics aim to ensure that autonomous systems operate reliably and avoid harmful actions, a prerequisite for deploying them in real-world settings. Current research develops and verifies safety mechanisms through several complementary approaches: control barrier functions, constrained reinforcement learning, and formal verification techniques such as reachability analysis, often with neural networks or Gaussian processes serving as learned models for dynamics and prediction. These advances are crucial for building trust and enabling wider adoption of AI-powered systems in safety-critical applications such as healthcare, autonomous driving, and robotics.
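Of these approaches, control barrier functions (CBFs) are the easiest to illustrate compactly. The sketch below shows the core safety-filter idea for a one-dimensional single integrator: the barrier h(x) = x - x_min must stay nonnegative, and the nominal control input is minimally adjusted whenever it would violate the CBF condition h_dot + alpha*h >= 0. The function names, dynamics, and parameter values are illustrative assumptions, not drawn from any specific paper listed here.

```python
def cbf_safety_filter(x, u_nom, x_min=0.0, alpha=1.0):
    """Minimally modify u_nom so the barrier h(x) = x - x_min stays nonnegative.

    For single-integrator dynamics x_dot = u, the CBF condition
    h_dot(x) + alpha * h(x) >= 0 reduces to u >= -alpha * (x - x_min),
    so the closest safe input is a simple clamp on the nominal command.
    """
    h = x - x_min               # barrier value: positive inside the safe set
    u_lower = -alpha * h        # smallest input satisfying the CBF condition
    return max(u_nom, u_lower)


# Example: a nominal controller that keeps pushing toward the unsafe region.
x, dt = 1.0, 0.01
for _ in range(500):
    u_nom = -2.0                        # nominal command driving x below x_min
    u = cbf_safety_filter(x, u_nom)     # filtered command respects the barrier
    x += u * dt                         # x decays toward x_min but never crosses
print(f"final state: {x:.4f}")          # stays >= 0, the safe-set boundary
```

In higher dimensions the same filter is typically posed as a small quadratic program over the control input at each time step; the clamp above is simply the closed-form solution of that QP in the scalar case.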