Safety Violation

Safety violations in complex systems, particularly those involving artificial intelligence (AI) such as large language models (LLMs) and autonomous driving stacks, are a critical research area focused on identifying and mitigating vulnerabilities. Current research investigates methods for characterizing and detecting safety violations, including model checking and fuzzing, often using formal methods such as linear temporal logic (LTL) to specify safety properties and binary decision diagrams (BDDs) to represent large state spaces compactly during analysis. This work is crucial for the safe deployment of AI-powered systems in real-world applications, from robotics and industrial automation to autonomous vehicles, where failures can have severe consequences. Robust safety verification and validation techniques are therefore paramount to building trust and enabling widespread adoption of these technologies.
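
To make the model-checking idea concrete, the sketch below shows the core of explicit-state safety checking: a safety property of the form "G not bad" is violated exactly when a bad state is reachable from the initial state, and the search produces a counterexample trace. This is a minimal illustration, not the method of any particular paper; production checkers typically explore the state space symbolically (e.g., with BDDs) rather than state by state, and the toy counter system and all function names here are hypothetical.

```python
from collections import deque

def find_safety_violation(initial, successors, is_bad):
    """Breadth-first search for a reachable bad state.

    initial:    the initial state (hashable)
    successors: function mapping a state to its next states
    is_bad:     predicate marking states that violate the safety property

    Returns a counterexample trace (list of states) if the property
    "G not bad" is violated, or None if no bad state is reachable.
    """
    parent = {initial: None}   # visited set doubling as trace links
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            # Reconstruct the counterexample by walking parent links.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None  # bad states unreachable: the safety property holds

if __name__ == "__main__":
    # Hypothetical toy system: a counter that must never exceed 3.
    trace = find_safety_violation(
        initial=0,
        successors=lambda s: [s + 1, max(s - 1, 0)],
        is_bad=lambda s: s > 3,
    )
    print("counterexample:", trace)  # e.g. [0, 1, 2, 3, 4]
```

The returned trace plays the same role as a model checker's counterexample: a concrete execution demonstrating the violation, which is what makes safety findings actionable for debugging and validation.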

Papers