Safe AI
Safe AI research focuses on developing and deploying artificial intelligence systems that reliably avoid harmful behavior and prioritize human safety and ethical considerations. Current efforts concentrate on robust verification and interpretability methods, including control barrier functions and explainable AI (XAI) techniques, to ensure predictable, trustworthy AI behavior across diverse scenarios, particularly in high-stakes domains such as healthcare. The field is crucial for mitigating the risks posed by increasingly autonomous AI systems, informing not only the development of safer technologies but also the ethical guidelines and regulatory frameworks that govern AI deployment.
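The control barrier functions mentioned above are easier to follow with a concrete sketch. The snippet below is a minimal, hypothetical safety filter for a single-integrator robot avoiding a circular obstacle: it keeps the barrier value h(x) = ||x − c||² − r² nonnegative by projecting the nominal control onto the half-space implied by the CBF condition ḣ + αh ≥ 0. The dynamics, obstacle geometry, and gain α are illustrative assumptions, not drawn from any particular paper.

```python
"""Minimal sketch of a control barrier function (CBF) safety filter.

Illustrative only: assumes a single-integrator robot (x_dot = u) that must
stay outside a circular obstacle; the dynamics, obstacle, and gain `alpha`
are hypothetical choices for this sketch.
"""
import numpy as np


def cbf_safety_filter(x, u_nom, obstacle_center, radius, alpha=1.0):
    """Minimally modify u_nom so that h_dot(x, u) + alpha * h(x) >= 0,
    where h(x) = ||x - obstacle_center||^2 - radius^2 defines the safe
    set {x : h(x) >= 0}.

    For single-integrator dynamics x_dot = u, the CBF condition reduces
    to one linear constraint a^T u >= b with a = 2 (x - c) and
    b = -alpha * h(x), so the constrained least-squares problem has a
    closed-form projection instead of requiring a QP solver.
    """
    diff = x - obstacle_center
    h = diff @ diff - radius ** 2      # barrier value (>= 0 means safe)
    a = 2.0 * diff                     # gradient of h (g(x) = identity)
    b = -alpha * h                     # right-hand side of a^T u >= b

    if a @ u_nom >= b:                 # nominal input already satisfies CBF
        return u_nom
    # Otherwise project u_nom onto the half-space {u : a^T u >= b}.
    return u_nom + (b - a @ u_nom) / (a @ a) * a


if __name__ == "__main__":
    x = np.array([1.5, 0.0])           # robot position near the obstacle
    u_nom = np.array([-1.0, 0.0])      # nominal command driving toward it
    u_safe = cbf_safety_filter(
        x, u_nom, obstacle_center=np.array([0.0, 0.0]), radius=1.0
    )
    print("nominal:", u_nom, "filtered:", u_safe)
```

In practice the same condition is typically enforced as a quadratic program over the true system dynamics; the closed-form projection works here only because the single-integrator model yields a single linear constraint on the control input.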