Safety Constraint
Safety constraints in robotics and reinforcement learning ensure that autonomous systems achieve their objectives while operating safely, mitigating the inherent risks of trial-and-error learning. Current research integrates safety constraints into a range of control and learning frameworks, including model predictive control (MPC) with control barrier functions (CBFs), Bayesian optimization, and reinforcement learning algorithms augmented with safety critics or learned constraints. Such methods are essential for deploying autonomous systems in real-world, safety-critical settings such as human-robot collaboration and autonomous driving, where unexpected situations and failures must be handled gracefully. The overarching goal is to provide provable safety guarantees while maintaining high task performance.
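To make the CBF-based safety filtering mentioned above concrete, the following is a minimal illustrative sketch, not drawn from any of the listed papers. It assumes a 1-D single-integrator system, a hand-picked state bound X_MAX, and a decay rate GAMMA, and shows how a discrete-time CBF condition constrains the nominal action; for this toy system the filtering step has a closed-form solution.

```python
# Minimal sketch of a discrete-time control barrier function (CBF) safety filter.
# Assumptions (illustrative, not from the cited papers): a 1-D single integrator
# x_{k+1} = x_k + u * dt, a safe set {x : x <= X_MAX} with barrier h(x) = X_MAX - x,
# and the CBF condition h(x_{k+1}) >= (1 - GAMMA) * h(x_k).

DT = 0.1      # discretization step
X_MAX = 1.0   # state upper bound defining the safe set
GAMMA = 0.5   # CBF decay rate in (0, 1]; smaller values are more conservative

def h(x: float) -> float:
    """Barrier function: non-negative iff the state is in the safe set."""
    return X_MAX - x

def safety_filter(x: float, u_nominal: float) -> float:
    """Return the action closest to the nominal one that satisfies the CBF condition.

    For this 1-D linear system the condition
        h(x + u * DT) >= (1 - GAMMA) * h(x)
    reduces to u <= GAMMA * h(x) / DT, so the usual CBF quadratic program has a
    closed-form solution: clip the nominal action from above.
    """
    u_max = GAMMA * h(x) / DT
    return min(u_nominal, u_max)

if __name__ == "__main__":
    x = 0.0
    for k in range(20):
        u_nominal = 2.0                        # aggressive task-driven action pushing toward the bound
        u = safety_filter(x, u_nominal)
        x = x + u * DT                         # single-integrator dynamics
        assert h(x) >= 0.0, "left the safe set"  # the filter keeps h(x) non-negative by induction
        print(f"step {k:2d}: x = {x:.3f}, u = {u:.3f}")
```

In higher-dimensional systems the same condition typically appears as a linear constraint in a small quadratic program solved at every control step, with the nominal action supplied by an MPC policy or a learned RL policy.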
Papers
A Safety Modulator Actor-Critic Method in Model-Free Safe Reinforcement Learning and Application in UAV Hovering
Qihan Qi, Xinsong Yang, Gang Xia, Daniel W. C. Ho, Pengyang Tang
Flipping-based Policy for Chance-Constrained Markov Decision Processes
Xun Shen, Shuo Jiang, Akifumi Wachi, Kazumune Hashimoto, Sebastien Gros
Safe CoR: A Dual-Expert Approach to Integrating Imitation Learning and Safe Reinforcement Learning Using Constraint Rewards
Hyeokjin Kwon, Gunmin Lee, Junseo Lee, Songhwai Oh
Safety-Driven Deep Reinforcement Learning Framework for Cobots: A Sim2Real Approach
Ammar N. Abbas, Shakra Mehak, Georgios C. Chasparis, John D. Kelleher, Michael Guilfoyle, Maria Chiara Leva, Aswin K Ramasubramanian