Safety Case
A safety case is a structured, evidence-backed argument that a system is acceptably safe for a given application in a given environment, and it is crucial for deploying complex technologies such as autonomous vehicles and AI-powered medical systems. Current research focuses on frameworks and methods for constructing robust safety cases for systems that incorporate machine learning, addressing challenges such as the "hallucination" problem in large language models. This work supports the responsible development and deployment of advanced technologies in high-stakes domains, promoting both public trust and compliance with safety regulations.
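To make "structured argument" concrete, the sketch below models a minimal Goal Structuring Notation (GSN)-style claim tree in Python, where a top-level safety claim is decomposed into sub-claims that are ultimately backed by evidence. The class names, fields, and the toy perception example are illustrative assumptions, not part of any particular framework discussed in the papers on this topic.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """A piece of evidence (test result, proof, audit) supporting a claim."""
    description: str


@dataclass
class Claim:
    """A safety claim, supported by sub-claims and/or direct evidence."""
    statement: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim counts as supported if it has direct evidence, or if it has
        # sub-claims and every one of those sub-claims is itself supported.
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)


# Toy safety case for a hypothetical ML-based perception component.
top_claim = Claim(
    "The perception module is acceptably safe for deployment",
    sub_claims=[
        Claim(
            "Hazardous misclassifications are sufficiently rare",
            evidence=[Evidence("Held-out test error below the agreed threshold")],
        ),
        Claim(
            "Residual errors are mitigated at the system level",
            evidence=[Evidence("Runtime monitor triggers a safe fallback")],
        ),
    ],
)

print(top_claim.is_supported())  # True: every leaf claim has evidence
```

A real safety case would also record argument strategies, context, and assumptions alongside each claim; this sketch only shows the basic claim-evidence decomposition that such frameworks formalize.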