Safety Margin
Safety margins quantify how robust a system or model is to deviations from expected behavior, with the goal of preventing failures or undesirable outcomes. Current research focuses on developing methods to estimate and interpret these margins across diverse applications, including reinforcement learning agents, neural networks, and autonomous vehicles, employing techniques such as counterfactual simulations and probabilistic modeling with Gaussian processes. This work is crucial for enhancing the reliability and safety of autonomous systems, improving risk assessment, and enabling more effective human oversight in critical situations.
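As a minimal sketch of the Gaussian-process approach mentioned above, one could model observed deviations from nominal behavior as a function of an operating condition, then define the safety margin as the gap between a pessimistic (mean plus two standard deviations) prediction and a failure threshold. All names, the synthetic data, and the threshold value here are illustrative assumptions, not drawn from any specific paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical data: an operating condition (e.g., vehicle speed)
# versus the observed deviation from nominal behavior.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=40)
deviation = 0.05 * x**2 + rng.normal(0.0, 0.2, size=x.shape)

# Fit a GP so predictions come with calibrated uncertainty.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True)
gp.fit(x.reshape(-1, 1), deviation)

# Safety margin at a query condition: distance from a pessimistic
# (mean + 2*sigma) prediction to an assumed failure threshold.
FAILURE_THRESHOLD = 6.0  # assumed limit on tolerable deviation
x_query = np.array([[7.0]])
mean, std = gp.predict(x_query, return_std=True)
margin = FAILURE_THRESHOLD - (mean[0] + 2.0 * std[0])
print(f"estimated safety margin: {margin:.2f}")
```

Using the predictive standard deviation, rather than the mean alone, makes the margin shrink in regions where the model is uncertain, which is the probabilistic behavior a risk assessment typically wants.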