Safety Mechanism
Safety mechanisms for AI systems, particularly in autonomous vehicles and generative AI, are a critical research area focused on mitigating risks that stem from model limitations such as overconfidence, out-of-domain behavior, and susceptibility to adversarial attacks. Current research explores a range of redundant safety mechanisms, including online out-of-domain detection, robust training methods, and LLM-assisted safety requirements engineering. These efforts aim to produce reliable, trustworthy AI systems, informing both the development of safety standards and the safe deployment of AI in high-stakes applications.
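As a concrete illustration of the online out-of-domain detection idea, the sketch below uses the classic maximum-softmax-probability baseline: inputs on which the model's top-class confidence falls below a threshold are flagged and routed to a fallback rather than acted on. This is a minimal sketch, not any specific paper's method; the function names, the threshold value, and the example logits are all illustrative assumptions.

```python
import numpy as np

def max_softmax_probability(logits: np.ndarray) -> np.ndarray:
    """Confidence score per sample: the highest softmax probability."""
    z = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_out_of_domain(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Flag samples whose top-class confidence falls below the threshold.

    In a deployed system, flagged inputs would trigger a fallback safety
    mechanism (e.g., a conservative default action or a human handoff)
    instead of the model's prediction being used directly. The threshold
    here is a hypothetical value; in practice it is calibrated on
    held-out in-domain data.
    """
    return max_softmax_probability(logits) < threshold

# Example: three samples over four classes; the second is near-uniform,
# a typical signature of an out-of-domain input.
logits = np.array([
    [9.0, 0.1, 0.2, 0.1],   # confident in-domain prediction
    [1.1, 1.0, 0.9, 1.0],   # near-uniform logits -> likely out-of-domain
    [0.2, 7.5, 0.3, 0.1],   # confident in-domain prediction
])
print(flag_out_of_domain(logits))  # [False  True False]
```

More sophisticated detectors score distance in feature space or use auxiliary density models, but the routing pattern, score each input online and divert low-confidence cases to a safe fallback, is the same.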