Safety Guarantee
Safety guarantees in AI and robotics are meant to ensure that systems operate reliably and avoid harmful actions, a critical concern when deploying autonomous systems in real-world settings. Current research focuses on developing and verifying safety mechanisms through approaches such as control barrier functions, constrained reinforcement learning, and formal verification techniques like reachability analysis, often incorporating neural networks or Gaussian processes for modeling and prediction. These advances are crucial for building trust and enabling wider adoption of AI-powered systems in safety-critical applications such as healthcare, autonomous driving, and robotics.
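Control barrier functions, one of the approaches named above, enforce safety by modifying a nominal control input as little as possible while keeping the state inside a designated safe set. The sketch below illustrates the idea for a one-dimensional single integrator; the dynamics, the safe-set bound x_max, the gain alpha, and the nominal controller are illustrative assumptions, not details taken from the papers listed below.

# A minimal sketch of a control barrier function (CBF) safety filter for a
# 1-D single integrator x' = u with safe set {x : x <= x_max}. All names
# (x_max, alpha, u_des) are illustrative assumptions, not from the listed papers.

def cbf_safety_filter(x, u_des, x_max=1.0, alpha=2.0):
    """Return the control closest to u_des that keeps h(x) = x_max - x >= 0.

    The CBF condition dh/dt >= -alpha * h(x) reduces here to
    -u >= -alpha * (x_max - x), i.e. u <= alpha * (x_max - x),
    so the usual CBF quadratic program has the closed-form solution below.
    """
    u_bound = alpha * (x_max - x)
    return min(u_des, u_bound)

# Usage: a proportional controller pushes toward x = 2.0 (outside the safe set);
# the filter overrides it just enough to keep the state below x_max = 1.0.
x, dt = 0.0, 0.01
for _ in range(200):
    u_nominal = 1.5 * (2.0 - x)          # desired but unsafe control input
    u = cbf_safety_filter(x, u_nominal)  # minimally invasive safety override
    x += dt * u
print(f"final state x = {x:.3f} (remains within the safe set x <= 1.0)")

In higher-dimensional or nonlinear settings this projection is typically posed as a small quadratic program solved at every control step, rather than the closed-form minimum used in this toy example.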
Papers
Safety-Aware Preference-Based Learning for Safety-Critical Control
Ryan K. Cosner, Maegan Tucker, Andrew J. Taylor, Kejun Li, Tamás G. Molnár, Wyatt Ubellacker, Anil Alan, Gábor Orosz, Yisong Yue, Aaron D. Ames
Safety-Critical Control with Input Delay in Dynamic Environment
Tamas G. Molnar, Adam K. Kiss, Aaron D. Ames, Gábor Orosz