Safety Filter
Safety filters are mechanisms that prevent unsafe actions in autonomous systems, from robots to AI models, by modifying or rejecting potentially hazardous commands before they are executed. Current research focuses on filters that are both robust and minimally invasive, meaning they intervene only when a nominal command would violate a safety constraint, drawing on control barrier functions, reachability analysis, and reinforcement learning, and increasingly combining these with learned representations and generative models such as Gaussian splatting and VQ-VAEs for efficiency and adaptability. These advances are central to the safe deployment of increasingly complex autonomous systems, from robotics and autonomous driving to generative AI, where they mitigate risk and help build trust in the technology.
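As a concrete illustration of the "minimally invasive" idea, the sketch below implements the standard control-barrier-function (CBF) filter: the nominal command passes through unchanged whenever it already satisfies the barrier condition, and is otherwise projected onto the nearest command that does. It assumes single-integrator dynamics and a single circular obstacle (so the usual CBF quadratic program reduces to one linear constraint with a closed-form solution); the function name `cbf_safety_filter` and all parameter values are illustrative, not drawn from any particular paper.

```python
import numpy as np

def cbf_safety_filter(u_nom, x, x_obs, radius, alpha=1.0):
    """Minimally invasive safety filter based on a control barrier function.

    Sketch only: assumes single-integrator dynamics (x_dot = u) and one
    circular obstacle, so the CBF quadratic program
        min ||u - u_nom||^2   s.t.   dh/dx . u >= -alpha * h(x)
    has a single linear constraint and a closed-form solution.
    """
    # Barrier function h(x) >= 0 encodes "outside the obstacle".
    diff = x - x_obs
    h = diff @ diff - radius**2
    # For x_dot = u, h_dot = 2 * diff . u, so the CBF condition
    # h_dot + alpha * h >= 0 becomes a . u >= b with:
    a = 2.0 * diff
    b = -alpha * h
    if a @ u_nom >= b:
        return u_nom  # nominal command already safe: pass it through untouched
    # Otherwise project u_nom onto the constraint boundary (minimal correction).
    return u_nom + ((b - a @ u_nom) / (a @ a)) * a

# Example: a robot commanded straight at an obstacle gets deflected around it.
x = np.array([-2.0, 0.05])
x_obs = np.array([0.0, 0.0])
u_goal = np.array([1.0, 0.0])  # nominal controller: drive toward +x
for _ in range(50):
    u = cbf_safety_filter(u_goal, x, x_obs, radius=1.0)
    x = x + 0.1 * u  # Euler step of x_dot = u
    assert (x - x_obs) @ (x - x_obs) >= 1.0**2 - 1e-6  # stays outside the obstacle
```

The pass-through branch is what makes the filter minimally invasive: the safe set is left to the nominal controller, and the filter only acts on its boundary. Reachability- and learning-based filters follow the same override pattern but compute the safe/unsafe boundary differently.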
Papers
Thirteen papers, published between July 4, 2022 and September 25, 2023.