Safety Argument
Safety argumentation aims to demonstrate rigorously that a system's residual risk is acceptable, and it is crucial for the safe deployment of increasingly complex systems, particularly those incorporating AI/ML components. Current research emphasizes methods for generating and evaluating safety cases, often using Goal Structuring Notation (GSN) and combining knowledge-based and data-driven approaches, including machine learning for hazard prediction and mitigation. This work is vital for building trust and confidence in autonomous systems and other safety-critical applications across domains ranging from healthcare to transportation.
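To make the GSN idea concrete, the following is a minimal sketch of a safety case as a goal tree, with a simple completeness check that flags claims not yet backed by evidence. The class names, node labels, hazards, and the `undeveloped_goals` check are illustrative assumptions for this sketch, not the notation of any particular paper or GSN tool.

```python
# A minimal sketch of a Goal Structuring Notation (GSN) safety case as a tree.
# Node kinds follow the common GSN vocabulary: a Goal (claim) is decomposed
# via Strategies into sub-goals, which are ultimately supported by Solutions
# (items of evidence). All labels below are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # "Goal", "Strategy", or "Solution" (evidence)
    statement: str
    children: List["Node"] = field(default_factory=list)

def undeveloped_goals(node: Node) -> List[Node]:
    """Return Goal/Strategy nodes not ultimately supported by any Solution."""
    if node.kind == "Solution":
        return []
    if not node.children:
        # A leaf Goal or Strategy has no supporting evidence beneath it.
        return [node]
    unsupported: List[Node] = []
    for child in node.children:
        unsupported.extend(undeveloped_goals(child))
    return unsupported

# Hypothetical top-level claim for an ML-based perception component.
case = Node("Goal", "Residual risk of the perception component is acceptable", [
    Node("Strategy", "Argue over all identified hazards", [
        Node("Goal", "Hazard H1 (missed pedestrian) is adequately mitigated", [
            Node("Solution", "Test report: miss rate below target on held-out data"),
        ]),
        Node("Goal", "Hazard H2 (phantom braking) is adequately mitigated"),  # undeveloped
    ]),
])

for g in undeveloped_goals(case):
    print(f"Undeveloped {g.kind}: {g.statement}")
```

Running the sketch prints the H2 goal as undeveloped, mirroring how GSN reviews surface claims that still lack supporting evidence before a safety case is accepted.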