Reward Engineering
Reward engineering, the process of designing reward functions for reinforcement learning (RL) agents, aims to guide agents efficiently toward desired behaviors. Current research focuses on reducing the need for extensive manual reward design through preference-based RL, which leverages human feedback or large language models (LLMs) to define rewards implicitly, and through techniques that learn reusable or adaptable reward functions across multiple tasks. These advances matter because they cut the substantial human effort and domain expertise traditionally required for effective RL, enabling broader application in robotics, game AI, and other complex domains.
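As an illustration of the preference-based approach, the sketch below fits a small reward network to pairwise trajectory preferences using the standard Bradley-Terry objective: the probability that segment A is preferred over segment B is modeled as a sigmoid of the difference between their predicted returns. This is a minimal PyTorch sketch, not the method of any particular paper; the names (RewardModel, preference_loss) and the synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RewardModel(nn.Module):
    """Predicts a scalar reward for each state-action pair (hypothetical example)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (batch, T, obs_dim), act: (batch, T, act_dim) -> rewards: (batch, T)
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def preference_loss(model, seg_a, seg_b, pref):
    """Bradley-Terry loss over two equal-length trajectory segments.

    pref[i] = 1.0 if the annotator (or LLM judge) preferred segment A
    in pair i, 0.0 if they preferred segment B.
    """
    ret_a = model(*seg_a).sum(dim=-1)  # predicted return of segment A
    ret_b = model(*seg_b).sum(dim=-1)  # predicted return of segment B
    # P(A preferred over B) = sigmoid(ret_a - ret_b); train with cross-entropy.
    return F.binary_cross_entropy_with_logits(ret_a - ret_b, pref)


# Toy training step with random tensors standing in for labeled preference data.
obs_dim, act_dim, T, batch = 8, 2, 25, 32
model = RewardModel(obs_dim, act_dim)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

seg_a = (torch.randn(batch, T, obs_dim), torch.randn(batch, T, act_dim))
seg_b = (torch.randn(batch, T, obs_dim), torch.randn(batch, T, act_dim))
pref = torch.randint(0, 2, (batch,)).float()

loss = preference_loss(model, seg_a, seg_b, pref)
opt.zero_grad()
loss.backward()
opt.step()
```

Once trained, the learned reward stands in for a hand-designed one: the RL agent is then optimized against the model's per-step predictions rather than an environment-provided reward signal.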