Reward Shaping
Reward shaping in reinforcement learning accelerates agent training and improves sample efficiency by modifying the reward function to provide more informative feedback. Current research focuses on methods that automatically generate or adapt reward functions, often by leveraging large language models or incorporating domain knowledge, within frameworks such as potential-based shaping or by directly shaping Q-values. These advances matter because designing an effective reward function is a critical bottleneck in applying reinforcement learning to complex real-world problems; easing that bottleneck yields more efficient and robust agent training across diverse applications.
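To make the potential-based shaping idea mentioned above concrete, here is a minimal sketch in Python. It assumes a toy 1-D state space with an illustrative goal-distance potential; the function and variable names are hypothetical, not from any specific paper. The shaping term has the standard form F(s, s') = γΦ(s') − Φ(s), which is known to leave the optimal policy unchanged while providing denser feedback.

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Augment the environment reward with F(s, s') = gamma * Phi(s') - Phi(s).

    Potential-based shaping of this form preserves the optimal policy
    while giving the agent denser learning signal.
    """
    return reward + gamma * phi_s_next - phi_s


# Illustrative potential for a 1-D grid: negative distance to a goal
# state, so states closer to the goal have higher potential.
GOAL = 10

def phi(state):
    return -abs(GOAL - state)


# A step toward the goal (state 4 -> 5) earns a positive shaping bonus,
# while a step away (state 5 -> 4) is penalized, even when the raw
# environment reward is zero.
bonus = shaped_reward(0.0, phi(4), phi(5))
penalty = shaped_reward(0.0, phi(5), phi(4))
```

The choice of Φ encodes domain knowledge (here, "closer to the goal is better"); the automated methods described above effectively learn or generate such potentials rather than hand-coding them.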