Safety Constraints
Research on safety constraints in robotics and reinforcement learning aims to ensure that autonomous systems achieve their objectives without unsafe behavior, addressing the inherent risks of trial-and-error learning. Current work integrates safety constraints into a range of control and learning frameworks, including model predictive control (MPC) with control barrier functions (CBFs), Bayesian optimization, and reinforcement learning algorithms augmented with safety critics or learned constraints. These methods matter most when deploying autonomous systems in safety-critical, real-world settings such as human-robot collaboration and autonomous driving, where unexpected situations and failures must be mitigated. The overarching goal is to provide provable safety guarantees while maintaining high task performance.
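To make the CBF idea above concrete, the following is a minimal sketch of a safety filter: a nominal go-to-goal controller is minimally modified at each step so that a barrier function h(x) ≥ 0 (here, staying outside a circular obstacle) is never violated. The single-integrator dynamics, the function names, and all parameters are illustrative assumptions for this sketch, not taken from the papers listed below.

```python
# A minimal sketch of a control barrier function (CBF) safety filter.
# Assumptions (not from the papers below): single-integrator dynamics
# x_dot = u, a circular obstacle, and the barrier h(x) = |x - o|^2 - r^2.
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimally modify u_nom so that grad_h(x) . u >= -alpha * h(x).
    With a single linear constraint, the QP has a closed-form projection."""
    h = np.dot(x - obstacle, x - obstacle) - radius ** 2   # h >= 0 means safe
    grad_h = 2.0 * (x - obstacle)
    slack = np.dot(grad_h, u_nom) + alpha * h              # constraint residual
    if slack >= 0.0:
        return u_nom                                       # nominal input already safe
    # Project u_nom onto the constraint boundary: the smallest modification
    # (in the Euclidean sense) that restores the CBF condition.
    return u_nom - slack * grad_h / np.dot(grad_h, grad_h)

# Usage: a naive go-to-goal controller, filtered so the robot slides
# around the obstacle instead of driving through it.
x = np.array([0.0, 0.1])
goal = np.array([4.0, 0.0])
obstacle, radius = np.array([2.0, 0.0]), 0.5
for _ in range(2000):
    u_nom = goal - x                          # unsafe nominal controller
    u = cbf_safety_filter(x, u_nom, obstacle, radius)
    x = x + 0.01 * u                          # Euler step of x_dot = u
assert np.linalg.norm(x - obstacle) > radius  # the barrier was never crossed
```

Because only one linear constraint is active here, the quadratic program reduces to a closed-form projection; with multiple simultaneous constraints, as in MPC-based formulations, a QP solver would be used at each control step instead.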
Papers
Certifiably-correct Control Policies for Safe Learning and Adaptation in Assistive Robotics
Keyvan Majd, Geoffrey Clark, Tanmay Khandait, Siyu Zhou, Sriram Sankaranarayanan, Georgios Fainekos, Heni Ben Amor
Throughput of Freeway Networks under Ramp Metering Subject to Vehicle Safety Constraints
Milad Pooladsanj, Ketan Savla, Petros A. Ioannou