Safety Verification
Safety verification aims to ensure the reliable and safe operation of autonomous systems, particularly AI-based systems such as autonomous vehicles and robots. Current research emphasizes formal methods, including temporal logics and reachability analysis, to verify system behavior against safety constraints, often combining model predictive control and neural networks (including invertible architectures) within the verification process. This work is crucial for building trust in AI-powered systems and enabling their adoption in safety-critical applications, driving advances in both the theory and the practical implementation of robust safety-assurance techniques.
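To make the reachability-analysis idea concrete, below is a minimal, illustrative sketch of forward reachability for a discrete-time linear system using interval arithmetic. The system matrix, initial box, and unsafe region are all hypothetical choices for demonstration, not taken from any particular paper; real tools use far more sophisticated set representations (zonotopes, polytopes, neural-network relaxations).

```python
# Minimal sketch of interval-based reachability analysis for a
# discrete-time linear system x[k+1] = A @ x[k] + b.
# All matrices and bounds below are illustrative assumptions.

def propagate_interval(A, b, lo, hi):
    """Propagate an axis-aligned box [lo, hi] through x' = A x + b
    using interval arithmetic (exact for a linear map on a box)."""
    n = len(lo)
    new_lo, new_hi = [], []
    for i in range(n):
        acc_lo, acc_hi = b[i], b[i]
        for j in range(n):
            a = A[i][j]
            if a >= 0:
                acc_lo += a * lo[j]
                acc_hi += a * hi[j]
            else:
                acc_lo += a * hi[j]
                acc_hi += a * lo[j]
        new_lo.append(acc_lo)
        new_hi.append(acc_hi)
    return new_lo, new_hi

def verify_safe(A, b, lo, hi, unsafe_lo, unsafe_hi, steps):
    """Return False if any reachable box intersects the unsafe box
    within the given horizon, True otherwise."""
    for _ in range(steps):
        lo, hi = propagate_interval(A, b, lo, hi)
        intersects = all(hi[i] >= unsafe_lo[i] and lo[i] <= unsafe_hi[i]
                         for i in range(len(lo)))
        if intersects:
            return False
    return True

# Stable 2D system contracting toward the origin; unsafe region far away.
A = [[0.9, 0.1], [0.0, 0.8]]
b = [0.0, 0.0]
safe = verify_safe(A, b, [-1.0, -1.0], [1.0, 1.0],
                   [5.0, 5.0], [6.0, 6.0], steps=20)
print(safe)  # True: the reachable boxes shrink and never touch the unsafe box
```

Because the interval propagation over-approximates the reachable set in general, a `True` result is a sound safety certificate over the horizon, while a `False` result may be spurious; tighter set representations reduce that conservatism.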