Safety-Critical Applications
Safety-critical applications demand high reliability and trustworthiness from their underlying systems, particularly systems that employ machine learning models such as deep neural networks and large language models. Current research focuses on developing robust evaluation frameworks, improving model safety through techniques such as inherently safe design, run-time error detection, and objective suppression, and establishing certification processes for these models. This work is crucial for the safe deployment of AI in high-stakes domains such as autonomous vehicles and healthcare, and it drives advances in both the theoretical understanding and the practical implementation of reliable AI systems.
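Of the techniques listed above, run-time error detection is the most mechanical, so a minimal sketch can make it concrete: wrap the model behind a monitor that suppresses low-confidence outputs in favor of a safe fallback action. The sketch below is an illustration under stated assumptions, not a reference implementation from any of the surveyed papers; it assumes a scikit-learn-style classifier exposing `predict_proba`, and the threshold value, fallback action, and function names are hypothetical.

```python
import numpy as np

# Minimal sketch of a run-time safety monitor: the wrapped model is only
# trusted when its predictive confidence clears a threshold; otherwise the
# system falls back to a predefined safe action. The threshold and the
# fallback below are illustrative assumptions, tuned per application.

SAFE_FALLBACK = "request_human_review"  # hypothetical safe default action
CONFIDENCE_THRESHOLD = 0.9              # illustrative cutoff, not prescribed

def monitored_predict(model, x):
    """Return the model's decision, or the safe fallback if the
    prediction looks unreliable at run time."""
    probs = model.predict_proba(x)      # assumes a scikit-learn-style API
    confidence = float(np.max(probs))
    if confidence < CONFIDENCE_THRESHOLD:
        # Run-time error detection: low confidence is treated as a
        # potential error, so the model's output is suppressed in favor
        # of the safe default.
        return SAFE_FALLBACK, confidence
    return int(np.argmax(probs)), confidence
```

In a deployed system the confidence check would typically be replaced by a calibrated uncertainty estimate or an out-of-distribution detector, but the control flow, detect a potential error at run time and fall back to a safe behavior, is the same.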