Safety Argument

Safety argumentation is crucial for the safe deployment of increasingly complex systems, particularly those incorporating AI/ML components; it focuses on rigorously demonstrating that a system's residual risk is acceptable. Current research emphasizes methods for generating and evaluating safety cases, often employing Goal Structuring Notation (GSN) and integrating knowledge-based and data-driven approaches, including machine learning for hazard prediction and mitigation. This work is vital for building trust and confidence in autonomous systems and other safety-critical applications across domains ranging from healthcare to transportation.
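To make the GSN idea concrete, here is a minimal sketch (in Python, with hypothetical node names) of a safety case as a tree of goals, strategies, and solution (evidence) nodes, plus a check for undeveloped goals, i.e. claims not yet backed by sub-goals or evidence. The node kinds and the toy hazards are illustrative assumptions, not part of any specific GSN tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One element of a GSN-style argument tree (illustrative model)."""
    id: str
    text: str
    children: list = field(default_factory=list)
    kind: str = "goal"  # "goal" | "strategy" | "solution"

def undeveloped_goals(node):
    """Return ids of goals with no supporting children: gaps in the argument."""
    gaps = []
    if node.kind == "goal" and not node.children:
        gaps.append(node.id)
    for child in node.children:
        gaps.extend(undeveloped_goals(child))
    return gaps

# Toy safety case: the top-level claim is decomposed, via a strategy,
# into one goal per identified hazard; G3 has no evidence yet.
case = Node("G1", "System residual risk is acceptable", [
    Node("S1", "Argue over each identified hazard", [
        Node("G2", "Hazard H1 is mitigated",
             [Node("Sn1", "Test report TR-12", kind="solution")]),
        Node("G3", "Hazard H2 is mitigated"),
    ], kind="strategy"),
])

print(undeveloped_goals(case))  # → ['G3']
```

Evaluating a safety case then amounts, at minimum, to confirming that this list of undeveloped goals is empty before deployment.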

Papers