Safety Filter

Safety filters are mechanisms that prevent unsafe actions in autonomous systems, from robots to AI models, by modifying or rejecting potentially hazardous commands before they are executed. Current research focuses on filters that are both robust and minimally invasive, drawing on approaches such as control barrier functions, reachability analysis, and reinforcement learning, often combined with learned scene representations and generative models such as Gaussian splatting or VQ-VAEs for improved efficiency and adaptability. These advances are central to the safe deployment of increasingly complex autonomous systems, from robotics and autonomous driving to generative AI, where they mitigate risk and build trust in the technology.
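
As a concrete illustration of the "minimally invasive" filtering idea mentioned above, the sketch below shows a control-barrier-function style filter: a nominal command is altered as little as possible so that the state stays inside a safe set. It is a hypothetical toy example for a 1-D single integrator with hand-picked bounds `x_min`, `x_max`, and gain `alpha`, not an implementation from any of the papers listed under this topic.

```python
import numpy as np

def cbf_safety_filter(x, u_nominal, x_min=-1.0, x_max=1.0, alpha=1.0):
    """Minimally invasive safety filter for a 1-D single integrator x_dot = u.

    Toy example (assumed system, not from the cited papers): the barrier
    functions h_upper(x) = x_max - x and h_lower(x) = x - x_min must satisfy
    h_dot >= -alpha * h, which reduces to box constraints on u. The filter
    returns the admissible input closest to the nominal command.
    """
    u_hi = alpha * (x_max - x)   # upper bound on u from h_upper
    u_lo = -alpha * (x - x_min)  # lower bound on u from h_lower
    return float(np.clip(u_nominal, u_lo, u_hi))

# Simulate an aggressive nominal controller that tries to push x past x_max.
x, dt = 0.0, 0.05
for _ in range(100):
    u_nom = 2.0                       # unsafe constant push toward the boundary
    u_safe = cbf_safety_filter(x, u_nom)
    x += dt * u_safe                  # forward-Euler integration
print(f"final state x = {x:.3f} (stays below x_max = 1.0)")
```

The same pattern generalizes to higher-dimensional systems, where the clipping step becomes a small quadratic program solved at each control step.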

Papers