Safety-Critical Applications

Safety-critical applications demand high reliability and trustworthiness from their underlying systems, particularly when those systems employ machine learning models such as deep neural networks and large language models. Current research focuses on three fronts: robust evaluation frameworks; techniques for improving model safety, including inherently safe design, run-time error detection, and objective suppression; and certification processes for these models. This work is crucial for the safe deployment of AI in high-stakes domains such as autonomous vehicles and healthcare, and it drives advances in both the theoretical understanding and the practical implementation of reliable AI systems.
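As a rough illustration of the run-time error detection idea, a model's output can be wrapped in a monitor that falls back to a verified safe behaviour whenever a run-time check fails. The sketch below is a minimal, hypothetical example: the function names, the confidence-based check, and the `"SAFE_STOP"` fallback are illustrative assumptions, not the method of any specific paper.

```python
# Minimal sketch of a run-time monitor for a safety-critical model.
# All names (monitored_predict, toy_model, SAFE_STOP) are illustrative,
# not drawn from any particular system or paper.

def monitored_predict(model, x, confidence_threshold=0.9, fallback="SAFE_STOP"):
    """Run-time error detection sketch: call the model, and if its reported
    confidence falls below a threshold, return a safe fallback action
    instead of the raw prediction."""
    label, confidence = model(x)
    if confidence < confidence_threshold:
        return fallback  # hand control to a verified safe behaviour
    return label

def toy_model(x):
    # Toy stand-in model: always predicts "brake"; for this sketch,
    # the input value doubles as the model's reported confidence.
    return "brake", x

print(monitored_predict(toy_model, 0.95))  # confident -> "brake"
print(monitored_predict(toy_model, 0.40))  # uncertain -> "SAFE_STOP"
```

Real run-time monitors in the literature use richer checks (out-of-distribution detection, invariant violations, redundant sensing) in place of the single confidence threshold shown here.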

Papers