Safety Filter
Safety filters are mechanisms that prevent unsafe actions in autonomous systems, from robots to AI models, by modifying or rejecting potentially hazardous commands. Current research focuses on robust, minimally invasive filters built with diverse approaches, including control barrier functions, reachability analysis, and reinforcement learning, often combined with generative models such as Gaussian splatting or VQ-VAEs for improved efficiency and adaptability. These advances are crucial for the safe deployment of increasingly complex autonomous systems across domains ranging from robotics and autonomous driving to generative AI, mitigating risk and fostering trust in these technologies.
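The idea of a minimally invasive filter can be illustrated with a control barrier function (CBF): the filter passes the nominal command through unchanged when it satisfies the barrier condition, and overrides it only when necessary. Below is a minimal sketch for a 1-D integrator; the function name, the state bound `x_max`, and the gain `alpha` are illustrative assumptions, not drawn from any paper listed here.

```python
# Minimal sketch of a CBF-based safety filter for a 1-D integrator x' = u,
# keeping the state below x_max. Barrier: h(x) = x_max - x >= 0.
# CBF condition: h' + alpha*h >= 0, i.e. -u + alpha*(x_max - x) >= 0,
# which bounds the admissible input by u <= alpha*(x_max - x).

def cbf_safety_filter(x, u_nominal, x_max=1.0, alpha=2.0):
    """Return the input closest to u_nominal that satisfies the CBF condition."""
    u_bound = alpha * (x_max - x)
    # Minimally invasive: clamp the command only if it violates the bound.
    return min(u_nominal, u_bound)

# Far from the boundary the nominal command passes through unchanged;
# at the boundary it is overridden to stop motion toward the constraint.
print(cbf_safety_filter(0.0, 0.5))   # -> 0.5 (safe, unchanged)
print(cbf_safety_filter(1.0, 0.5))   # -> 0.0 (clamped at the boundary)
```

In higher dimensions the same clamp becomes a quadratic program that finds the nearest safe input under the CBF constraint, which is the formulation most CBF-filter papers use.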
31 papers
Papers
Safety Filtering While Training: Improving the Performance and Sample Efficiency of Reinforcement Learning Agents
Federico Pizarro Bejarano, Lukas Brunke, Angela P. Schoellig

RPCBF: Constructing Safety Filters Robust to Model Error and Disturbances via Policy Control Barrier Functions
Luzia Knoedler, Oswin So, Ji Yin, Mitchell Black, Zachary Serlin, Panagiotis Tsiotras, Javier Alonso-Mora, Chuchu Fan