Reward Shaping
Reward shaping in reinforcement learning modifies the reward function to give the agent more informative feedback, with the goals of accelerating training and improving sample efficiency. Current research focuses on methods that automatically generate or adapt reward functions, for example by leveraging large language models or by encoding domain knowledge, often within frameworks such as potential-based shaping or by shaping Q-values directly. These advances matter because designing an effective reward function is a critical bottleneck in applying reinforcement learning to complex real-world problems, and better shaping methods lead to more efficient and robust agent training across diverse applications.
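To make the potential-based framework concrete, here is a minimal sketch (the goal position, potential function, and trajectory are illustrative assumptions, not from any specific paper above). It adds the shaping term F(s, s') = γ·Φ(s') − Φ(s) to each base reward and numerically checks the telescoping identity that makes this form policy-invariant: the shaped return equals the base return plus γ^T·Φ(s_T) − Φ(s_0), a constant offset for a fixed start state.

```python
GAMMA = 0.99

def phi(s):
    """Hand-crafted potential: negative distance to an assumed goal state at 10."""
    return -(10 - s)

def shape(r, s, s_next):
    """Potential-based shaping: add F(s, s') = gamma*phi(s') - phi(s) to reward r."""
    return r + GAMMA * phi(s_next) - phi(s)

def discounted_return(rewards):
    g = 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
    return g

# An illustrative trajectory with a sparse base reward at the end.
states = [0, 1, 2, 3]
base_rewards = [0.0, 0.0, 1.0]
shaped_rewards = [shape(r, s, sn)
                  for r, s, sn in zip(base_rewards, states, states[1:])]

g_base = discounted_return(base_rewards)
g_shaped = discounted_return(shaped_rewards)

# Telescoping identity: shaped return = base return + gamma^T*phi(s_T) - phi(s_0),
# so the shaping shifts returns by a start-state constant and preserves optimal policies.
identity = g_base + GAMMA ** len(base_rewards) * phi(states[-1]) - phi(states[0])
print(abs(g_shaped - identity) < 1e-9)
```

Note how the shaped rewards are dense (each step toward the goal earns credit immediately) even though the base reward is sparse, which is exactly the feedback-density benefit the summary describes.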