Safe Exploration
Safe exploration in reinforcement learning focuses on enabling agents to learn optimal policies while avoiding unsafe actions and states during the learning process itself, not only at deployment time. Current research emphasizes algorithmic frameworks (e.g., model-predictive control, Bayesian methods, and Lagrangian approaches to constrained optimization) that incorporate safety constraints directly, often via control barrier functions or risk-aware exploration strategies. The field is crucial for deploying reinforcement learning in real-world applications, particularly robotics and autonomous systems, where safety is paramount and unconstrained trial-and-error learning could have severe consequences.
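To make the idea concrete, below is a minimal sketch of one common ingredient mentioned above: a control-barrier-function (CBF) safety filter that projects an exploratory action onto a safe set before it is executed. Everything here is illustrative rather than drawn from a specific paper: the single-integrator dynamics, the state bounds, the `alpha` rate, and the function names are all assumptions chosen so the CBF condition has a closed-form (clamping) solution.

```python
import numpy as np

def cbf_safety_filter(x, u_nominal, x_min=-1.0, x_max=1.0, alpha=0.5):
    """Minimally modify a proposed action so the next state stays in [x_min, x_max].

    Assumes toy single-integrator dynamics x_{t+1} = x_t + u_t and the barrier
    functions h1(x) = x_max - x and h2(x) = x - x_min. The discrete-time CBF
    condition h(x_{t+1}) >= (1 - alpha) * h(x_t) then reduces to box bounds on
    the action, so the usual CBF quadratic program collapses to a clamp.
    """
    u_upper = alpha * (x_max - x)   # keeps h1 (distance to upper bound) non-negative
    u_lower = -alpha * (x - x_min)  # keeps h2 (distance to lower bound) non-negative
    return float(np.clip(u_nominal, u_lower, u_upper))

# Toy exploration loop: a random (exploratory) policy proposes actions, and the
# filter overrides them only when they would push the state outside [-1, 1].
rng = np.random.default_rng(0)
x = 0.0
for t in range(20):
    u_exploratory = rng.normal(scale=0.8)         # unconstrained exploration noise
    u_safe = cbf_safety_filter(x, u_exploratory)  # minimal intervention on the action
    x = x + u_safe                                # apply the filtered action
    assert -1.0 <= x <= 1.0, "safety constraint violated"
print("final state:", x)
```

The design choice illustrated here is "minimal intervention": the learning agent explores freely, and the filter only alters actions that would violate the barrier condition, which is how CBF-based shielding is typically layered on top of an otherwise unconstrained exploration strategy.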