Curiosity-Driven Exploration

Curiosity-driven exploration in reinforcement learning aims to design agents that efficiently explore their environment to learn optimal or satisficing behaviors, especially in complex scenarios where exhaustive search is infeasible. Current research focuses on developing novel intrinsic reward functions, such as those based on prediction error, novelty, or the avoidance of redundant exploration, often integrated into hierarchical models or combined with techniques like active inference. These advancements are improving the sample efficiency of reinforcement learning algorithms and enabling better performance in challenging tasks, with applications ranging from robotics and game playing to the automated testing of large language models.
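The two most common intrinsic-reward families mentioned above can be illustrated in tabular form. The sketch below is a minimal, assumption-laden example (class names, the `beta` coefficient, and the 0/1 prediction-error metric are illustrative choices, not taken from any specific paper): a count-based novelty bonus `r_int(s) = beta / sqrt(N(s))` that decays as a state is revisited, and a prediction-error bonus that rewards transitions a simple forward model fails to predict.

```python
import math
from collections import defaultdict


class CountNoveltyBonus:
    """Count-based novelty: r_int(s) = beta / sqrt(N(s)).

    Frequently visited states yield a vanishing bonus, steering the
    agent toward under-explored regions. (Illustrative sketch.)
    """

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)  # visit counts N(s)

    def reward(self, state):
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])


class PredictionErrorBonus:
    """Prediction-error curiosity with a tabular forward model.

    The model memorizes the last observed next state for each
    (state, action) pair; the intrinsic reward is 1.0 when the model
    mispredicts and 0.0 otherwise, so deterministic, already-learned
    transitions stop being rewarding. (Illustrative sketch.)
    """

    def __init__(self):
        self.model = {}  # (state, action) -> predicted next state

    def reward(self, state, action, next_state):
        key = (state, action)
        error = 0.0 if self.model.get(key) == next_state else 1.0
        self.model[key] = next_state  # update model toward observation
        return error
```

In deep-RL settings the tables above are replaced by learned function approximators, e.g. pseudo-count density models for novelty or a neural forward-dynamics model (as in the Intrinsic Curiosity Module) whose prediction error serves as the bonus; the intrinsic reward is then typically added to the extrinsic reward during training.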

Papers