Safe Exploration
Safe exploration in reinforcement learning focuses on enabling agents to learn optimal policies while rigorously avoiding unsafe actions or states during the learning process. Current research emphasizes developing algorithms and model architectures (e.g., model-predictive control, Bayesian methods, and Lagrangian approaches) that incorporate safety constraints, often using techniques like control barrier functions or risk-aware exploration strategies. This field is crucial for deploying reinforcement learning in real-world applications, particularly in robotics and autonomous systems, where safety is paramount and trial-and-error learning could have severe consequences.
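One of the techniques named above, the control barrier function (CBF), can be used as a safety filter that projects an agent's proposed action onto the set of actions guaranteed to keep the state in a safe set. The sketch below is a minimal illustration under assumed toy dynamics (a 1-D integrator x' = x + a with safe interval [-1, 1] and barrier h(x) = 1 - x²); the function name and parameters are hypothetical, not from any specific paper listed here.

```python
import numpy as np

def cbf_safe_action(x, a_proposed, gamma=0.5):
    """Filter a proposed action through a discrete-time CBF condition.

    Safe set: x in [-1, 1], encoded by the barrier h(x) = 1 - x**2 >= 0.
    Assumed toy dynamics: x' = x + a.
    The CBF condition h(x') >= (1 - gamma) * h(x) lets h shrink by at
    most a factor (1 - gamma) per step, so the state never leaves the
    safe set if it starts inside it.
    """
    h = 1.0 - x**2
    target = (1.0 - gamma) * h          # minimum allowed barrier value at x'
    # h(x + a) >= target  <=>  (x + a)**2 <= 1 - target
    bound = np.sqrt(max(1.0 - target, 0.0))
    lo, hi = -bound - x, bound - x      # admissible action interval
    return float(np.clip(a_proposed, lo, hi))
```

For example, near the boundary (x = 0.9) a large proposed action toward the boundary is clipped so the next state stays within [-1, 1], while well inside the safe set (x = 0) small actions pass through unchanged. More realistic settings replace the clipping step with a quadratic program over the CBF constraint.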