Curiosity-Driven Exploration
Curiosity-driven exploration in reinforcement learning aims to design agents that efficiently explore their environment to learn optimal or satisficing behaviors, especially in complex scenarios where exhaustive search is infeasible. Current research focuses on developing novel intrinsic reward functions, such as those based on prediction error, novelty, or the avoidance of redundant exploration, often integrated into hierarchical models or combined with techniques like active inference. These advancements are improving the sample efficiency of reinforcement learning algorithms and enabling better performance in challenging tasks, with applications ranging from robotics and game playing to the automated testing of large language models.
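To make the prediction-error idea concrete, below is a minimal sketch of an intrinsic reward based on a learned forward model: the model predicts the next state from the current state and action, and the squared prediction error is the exploration bonus, which shrinks as transitions become familiar. This is an illustrative toy (a linear model trained by plain gradient descent), not the implementation from any particular paper; the class name, learning rate, and mixing coefficient `beta` are all assumptions made for the example.

```python
import numpy as np

class ForwardModelCuriosity:
    """Prediction-error intrinsic reward (illustrative sketch).

    A linear forward model predicts next_state from [state; action].
    The squared prediction error serves as the curiosity bonus; a
    gradient step on that error makes repeated transitions less
    rewarding over time, pushing the agent toward novelty.
    """

    def __init__(self, state_dim, action_dim, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        # next_state ~= W @ concat(state, action)
        self.W = rng.normal(0.0, 0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr

    def intrinsic_reward(self, state, action, next_state):
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        # One gradient-descent step on 0.5 * ||error||^2, so the bonus
        # for this transition decays the next time it is visited.
        self.W += self.lr * np.outer(error, x)
        return 0.5 * float(error @ error)

# Usage: mix the bonus with the extrinsic reward.
curiosity = ForwardModelCuriosity(state_dim=4, action_dim=2)
s, a, s_next = np.zeros(4), np.ones(2), np.full(4, 0.5)
extrinsic_reward, beta = 1.0, 0.1  # beta weights the exploration bonus
total_reward = extrinsic_reward + beta * curiosity.intrinsic_reward(s, a, s_next)
```

In a full agent, `total_reward` would replace the environment reward in the learning update; novelty-based variants follow the same pattern but derive the bonus from visit counts or density estimates instead of model error.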