Safety Learning
Safety learning in artificial intelligence focuses on developing algorithms that let autonomous systems operate safely while still achieving their performance goals. Current research emphasizes integrating robust safety mechanisms, such as Control Barrier Functions (CBFs) and adversarial training, with reinforcement learning (RL) frameworks, often using representation learning to improve the estimation of safety constraints. This work is crucial for deploying autonomous systems in the real world, particularly in safety-critical applications like autonomous driving and robotics, where reliable and safe operation is paramount.
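To make the CBF-plus-RL idea concrete, here is a minimal, hypothetical sketch of a safety filter layered on top of an RL policy's proposed action. The setup (a 1D agent with dynamics x' = x + u, safe set |x| ≤ 1, barrier h(x) = 1 − x², and the `safety_filter` helper) is entirely illustrative, not taken from any specific paper; real CBF filters typically solve a quadratic program rather than searching a candidate grid.

```python
import numpy as np

def h(x):
    """Barrier function: h(x) >= 0 exactly when |x| <= 1 (the safe set)."""
    return 1.0 - x**2

def safety_filter(x, u_rl, gamma=0.5, candidates=np.linspace(-0.5, 0.5, 201)):
    """Return the candidate action closest to the RL proposal u_rl that
    satisfies the discrete-time CBF condition
        h(x + u) >= (1 - gamma) * h(x),
    which keeps the safe set forward-invariant for gamma in (0, 1]."""
    feasible = [u for u in candidates if h(x + u) >= (1 - gamma) * h(x)]
    if not feasible:
        # No candidate satisfies the condition: fall back to the safest action.
        return max(candidates, key=lambda u: h(x + u))
    return min(feasible, key=lambda u: abs(u - u_rl))

x = 0.9            # state near the boundary of the safe set
u_unsafe = 0.5     # the RL policy proposes an action that would exit |x| <= 1
u_safe = safety_filter(x, u_unsafe)
# The filtered action still respects the CBF decrease condition:
assert h(x + u_safe) >= (1 - 0.5) * h(x)
```

The filter only intervenes when the policy's action would violate the barrier condition, so the RL objective is untouched inside the safe set's interior.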