Optimal Safe Policy
Research on optimal safe policies develops algorithms that learn policies which maximize performance while providing safety guarantees, particularly in high-risk domains. Current approaches leverage techniques such as constrained reinforcement learning, model predictive control with chance constraints, and neurosymbolic methods that incorporate weakest preconditions or runtime safety monitors to enforce safety during both training and deployment. This field is crucial for enabling the safe application of reinforcement learning in real-world settings such as robotics, autonomous driving, and healthcare, where safety is paramount. The development of provably safe and efficient algorithms is driving progress toward reliable and trustworthy AI systems.
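The constrained reinforcement learning approaches mentioned above are commonly framed as constrained MDPs and solved via Lagrangian relaxation: the agent maximizes reward minus a multiplier-weighted cost, while the multiplier is adjusted by dual ascent whenever the cost budget is exceeded. The sketch below is a minimal, hypothetical illustration of that idea on a toy tabular problem; the environment, constants, and rollout logic are assumptions for illustration only, not the method of any specific paper listed here.

```python
import numpy as np

# Toy Lagrangian-relaxed constrained policy gradient (illustrative sketch).
# Objective: maximize expected reward subject to average per-step cost <= cost_limit.
# All sizes, rates, and the random MDP below are assumed for demonstration.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
cost_limit = 0.2            # safety budget d in the constraint E[cost] <= d
lr_policy, lr_lambda = 0.1, 0.05

theta = np.zeros((n_states, n_actions))  # softmax policy logits
lam = 0.0                                # Lagrange multiplier for the cost constraint

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def rollout(theta, horizon=20):
    """Simulate one episode in a toy random MDP; return per-step trajectories."""
    states, actions, rewards, costs = [], [], [], []
    s = rng.integers(n_states)
    for _ in range(horizon):
        probs = softmax(theta[s])
        a = rng.choice(n_actions, p=probs)
        r = rng.normal(loc=0.1 * a)           # toy reward: higher-index actions pay more
        c = float(a == n_actions - 1)         # toy cost: the last action is "unsafe"
        states.append(s); actions.append(a); rewards.append(r); costs.append(c)
        s = rng.integers(n_states)            # toy random transitions
    return states, actions, rewards, costs

for episode in range(200):
    states, actions, rewards, costs = rollout(theta)
    # Lagrangian return: reward minus lambda-weighted cost for this episode
    ret = sum(rewards) - lam * sum(costs)
    # REINFORCE-style ascent on the Lagrangian objective
    for s, a in zip(states, actions):
        probs = softmax(theta[s])
        grad_logp = -probs
        grad_logp[a] += 1.0                   # gradient of log pi(a|s) w.r.t. logits
        theta[s] += lr_policy * ret * grad_logp
    # Dual ascent: raise the penalty when the average cost exceeds the budget
    lam = max(0.0, lam + lr_lambda * (np.mean(costs) - cost_limit))
```

In this formulation, safety is encoded as an expected-cost constraint rather than a hard guarantee; methods that certify safety during training (e.g., shielding with safety monitors or weakest-precondition checks) add an additional layer on top of this basic primal-dual scheme.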