Safety Guarantee
Safety guarantees in AI and robotics aim to ensure that systems operate reliably and avoid harmful actions, a critical concern when deploying autonomous systems in real-world settings. Current research focuses on developing and verifying safety mechanisms through diverse approaches, including control barrier functions, constrained reinforcement learning, and formal verification techniques such as reachability analysis, often incorporating neural networks or Gaussian processes for modeling and prediction. These advances are crucial for building trust in, and enabling wider adoption of, AI-powered systems in safety-critical applications such as healthcare, autonomous driving, and robotics.
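To make one of these mechanisms concrete, below is a minimal sketch of a control-barrier-function (CBF) safety filter for a single-integrator robot with dynamics x_dot = u. All function names, parameters, and the scenario are illustrative assumptions, not drawn from any specific paper; a full CBF controller would typically solve a quadratic program with input bounds rather than the closed-form projection used here.

```python
# Minimal CBF safety-filter sketch (illustrative assumptions throughout).
# Robot dynamics: x_dot = u (single integrator in the plane).
# Barrier: h(x) = ||x - x_obs||^2 - radius^2, with h(x) >= 0 meaning "safe".
# CBF condition: dh/dt = 2 (x - x_obs)^T u >= -alpha * h(x).
import numpy as np

def cbf_filter(x, u_nom, x_obs, radius, alpha=1.0):
    """Minimally modify a nominal control so it satisfies the CBF condition.

    The condition is linear in u (a^T u >= b), so the closest satisfying
    control in the Euclidean sense is a closed-form projection onto the
    constraint boundary. A CBF-QP would add actuator limits and solve a QP.
    """
    a = 2.0 * (x - x_obs)                                   # constraint gradient
    b = -alpha * (np.dot(x - x_obs, x - x_obs) - radius**2)  # -alpha * h(x)
    if a @ u_nom >= b:
        return u_nom                  # nominal control is already safe
    # Project u_nom onto the hyperplane a^T u = b (minimal correction).
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Example: drive toward the origin while avoiding a disk obstacle.
x = np.array([2.0, 0.1])              # robot position
x_obs = np.array([1.0, 0.0])          # obstacle center
u_nom = -x                            # nominal "go to goal" controller
u_safe = cbf_filter(x, u_nom, x_obs, radius=0.5)
print("nominal:", u_nom, "filtered:", u_safe)
```

The filtered control steers around the obstacle only when the nominal controller would violate the barrier condition, which is what makes CBF filters attractive as a lightweight runtime safety layer on top of learned or hand-designed policies.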