Paper ID: 2409.10188
Enhancing RL Safety with Counterfactual LLM Reasoning
Dennis Gross, Helge Spieker
Reinforcement learning (RL) policies may exhibit unsafe behavior and are hard to explain. We use counterfactual large language model reasoning to enhance RL policy safety post-training. We show that our approach improves RL policy safety and helps to explain it.
Submitted: Sep 16, 2024