Safety Assurance
Safety assurance for increasingly complex autonomous systems, particularly those employing machine learning, focuses on mitigating risks stemming from unpredictable behavior, adversarial attacks, and model uncertainty. Current research emphasizes developing robust safety architectures, including input-output filters, safety agents, and hierarchical systems, alongside novel algorithms like Optimistically Safe Online Convex Optimization for handling constraints and uncertainty in dynamic environments. These advancements are crucial for ensuring the reliable and safe deployment of AI agents across various sectors, from autonomous vehicles to industrial robotics, ultimately fostering trust and wider adoption of these technologies.
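The input-output filter mentioned above can be illustrated with a minimal sketch: a rule-based monitor wraps an untrusted learned policy and overrides any proposed action that would push the system outside a hard safety envelope. All names here (`learned_policy`, `safety_filter`, the speed/acceleration limits) are hypothetical, chosen only to show the wrapping pattern, not any specific system from the literature.

```python
def learned_policy(speed):
    """Stand-in for an ML controller: may propose unsafe accelerations."""
    return 2.0 * speed  # untrusted output


def safety_filter(action, speed, max_speed=30.0, max_accel=3.0):
    """Clamp the proposed action so the next state stays in the safe set."""
    # Enforce actuator limits first.
    action = max(-max_accel, min(max_accel, action))
    # Predictive check: would the next speed exceed the hard limit?
    if speed + action > max_speed:
        action = max_speed - speed
    return action


speed = 29.0
raw = learned_policy(speed)       # proposes 58.0, clearly unsafe
safe = safety_filter(raw, speed)  # overridden so speed stays <= 30.0
```

The design point is that the filter's guarantees depend only on the simple, verifiable monitor, not on the complexity of the model it wraps.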