Reward Shaping
Reward shaping in reinforcement learning modifies the reward function to provide more informative feedback, with the goal of accelerating agent training and improving sample efficiency. Current research focuses on methods that automatically generate or adapt reward functions, often by leveraging large language models or incorporating domain knowledge, and typically within frameworks such as potential-based shaping or direct shaping of Q-values. These advances matter because reward design is a critical bottleneck in applying reinforcement learning to complex real-world problems; better shaping methods lead to more efficient and robust training across diverse applications.
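To make the idea concrete, the sketch below illustrates potential-based shaping, where the shaping term F(s, s') = γΦ(s') − Φ(s) is added to a sparse environment reward; because F is a difference of potentials, the optimal policy is provably unchanged. The toy chain environment, the distance-to-goal potential, and all hyperparameters are illustrative assumptions, not drawn from any particular paper listed here.

```python
# Minimal sketch of potential-based reward shaping on a toy 1-D chain.
# Environment, potential function, and hyperparameters are assumptions
# chosen for illustration only.

import random

GAMMA = 0.99
GOAL = 10  # rightmost state of the chain


def potential(state: int) -> float:
    """Potential Phi(s): negative distance to the goal, so states nearer
    the goal have higher potential."""
    return -abs(GOAL - state)


def shaped_reward(env_reward: float, state: int, next_state: int) -> float:
    """Add the shaping term F(s, s') = gamma * Phi(s') - Phi(s).
    A potential difference leaves the optimal policy unchanged."""
    return env_reward + GAMMA * potential(next_state) - potential(state)


def step(state: int, action: int) -> tuple[int, float, bool]:
    """Toy chain dynamics: action -1 moves left, +1 moves right.
    The environment reward is sparse: 1 only on reaching the goal."""
    next_state = max(0, min(GOAL, state + action))
    done = next_state == GOAL
    env_reward = 1.0 if done else 0.0
    return next_state, env_reward, done


def run_episode(q: dict, epsilon: float = 0.1, alpha: float = 0.5) -> int:
    """One tabular Q-learning episode trained on the shaped reward."""
    state, steps, done = 0, 0, False
    while not done and steps < 200:
        # epsilon-greedy over the two actions
        if random.random() < epsilon:
            action = random.choice((-1, 1))
        else:
            action = max((-1, 1), key=lambda a: q.get((state, a), 0.0))
        next_state, env_reward, done = step(state, action)
        r = shaped_reward(env_reward, state, next_state)
        best_next = max(q.get((next_state, a), 0.0) for a in (-1, 1))
        target = r + (0.0 if done else GAMMA * best_next)
        q[(state, action)] = q.get((state, action), 0.0) + alpha * (
            target - q.get((state, action), 0.0)
        )
        state, steps = next_state, steps + 1
    return steps


if __name__ == "__main__":
    q_table: dict = {}
    for _ in range(50):
        steps_taken = run_episode(q_table)
    print("steps to goal in final episode:", steps_taken)
```

With the dense shaping signal, the agent is nudged toward the goal on every transition instead of waiting for the sparse terminal reward, which is the sample-efficiency benefit the summary above describes.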