Safe AI
Safe AI research focuses on developing and deploying artificial intelligence systems that reliably avoid harmful behaviors, prioritizing human safety and ethical considerations. Current efforts concentrate on robust verification methods, including control barrier functions and explainable AI (XAI) techniques, that ensure predictable and trustworthy behavior across diverse scenarios, particularly in high-stakes domains such as healthcare. The field is crucial for mitigating risks posed by increasingly autonomous systems: it informs the development of safer technologies and shapes the ethical guidelines and regulatory frameworks governing AI deployment.
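To make the control barrier function idea concrete, below is a minimal sketch of a CBF-based safety filter for a one-dimensional single-integrator system. All names, gains, and the system itself (X_MAX, ALPHA, the controllers) are hypothetical choices for illustration, not taken from any specific paper; the point is only to show how a barrier condition minimally overrides a nominal controller to keep the state in a safe set.

```python
# Minimal sketch (assumed example) of a control barrier function (CBF) filter
# for the 1-D single-integrator system x_dot = u, with safe set
# {x : h(x) >= 0} where h(x) = X_MAX - x.
# The filter enforces the CBF condition h_dot(x, u) >= -ALPHA * h(x),
# which keeps trajectories that start inside the safe set from leaving it.

X_MAX = 1.0   # boundary of the safe region (assumed value)
ALPHA = 2.0   # class-K gain on the barrier (assumed value)
DT = 0.01     # forward-Euler integration step

def h(x: float) -> float:
    """Barrier function: positive inside the safe set, zero on its boundary."""
    return X_MAX - x

def safety_filter(x: float, u_nominal: float) -> float:
    """Closed-form CBF filter for this single-input, single-constraint case.

    With x_dot = u and h(x) = X_MAX - x, the condition h_dot >= -ALPHA * h
    reduces to -u >= -ALPHA * h(x), i.e. u <= ALPHA * h(x), so the filter
    simply caps the nominal input at that bound.
    """
    u_max = ALPHA * h(x)
    return min(u_nominal, u_max)

def nominal_controller(x: float, x_goal: float, k: float = 5.0) -> float:
    """Proportional controller that, if unfiltered, would leave the safe set."""
    return k * (x_goal - x)

if __name__ == "__main__":
    x = 0.0
    for _ in range(500):
        # The nominal goal (2.0) lies outside the safe set; the filter prevents overshoot.
        u = safety_filter(x, nominal_controller(x, x_goal=2.0))
        x += DT * u  # forward-Euler step of x_dot = u
    print(f"final state {x:.3f} (remains below X_MAX = {X_MAX})")
```

As the state approaches the boundary, the admissible input shrinks to zero, so the filtered trajectory asymptotically approaches X_MAX without crossing it; the same quadratic-program-based construction generalizes to multi-input systems with multiple constraints.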