Intrinsic Reward
Intrinsic reward in reinforcement learning supplements the external task reward with an internally generated signal, encouraging agents to explore more broadly and learn more efficiently. Current research focuses on the design of these intrinsic reward mechanisms, often using prediction error, learned dynamics models, or guidance from large language models to drive exploration and improve sample efficiency in complex environments. This work addresses a key limitation of reinforcement learning, poor performance under sparse rewards, and yields more efficient and robust learning algorithms with potential applications in robotics, game playing, and other domains requiring autonomous decision-making.
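As background for the prediction-based methods mentioned above, the sketch below shows the basic shaping pattern: the agent is trained on the sum of the external task reward and an internally generated bonus, here the prediction error of a simple forward model, which shrinks as transitions become familiar. This is a minimal, generic illustration in Python/NumPy; the class name `ForwardModelBonus`, the coefficient `beta`, and the linear model are assumptions made for exposition, not the mechanism of any paper listed below.

```python
import numpy as np

class ForwardModelBonus:
    """Toy prediction-error intrinsic reward (illustrative sketch only).

    A linear forward model predicts the next state from (state, action);
    its squared prediction error, scaled by beta, is the intrinsic bonus.
    """

    def __init__(self, state_dim, action_dim, lr=1e-2, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix mapping [state; action] -> predicted next state.
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr      # learning rate for the forward model
        self.beta = beta  # scales the intrinsic bonus relative to the task reward

    def bonus_and_update(self, state, action, next_state):
        x = np.concatenate([state, action])
        pred = self.W @ x
        err = next_state - pred
        # Intrinsic reward: prediction error is large in unfamiliar regions.
        intrinsic = self.beta * float(err @ err)
        # One gradient step on the squared error, so frequently visited
        # transitions gradually stop generating a bonus.
        self.W += self.lr * np.outer(err, x)
        return intrinsic


# Usage: the agent optimises the shaped reward r_total = r_ext + r_int.
if __name__ == "__main__":
    bonus = ForwardModelBonus(state_dim=4, action_dim=2)
    s, a, s_next, r_ext = np.zeros(4), np.ones(2), np.full(4, 0.5), 0.0
    r_int = bonus.bonus_and_update(s, a, s_next)
    r_total = r_ext + r_int
    print(f"extrinsic={r_ext:.3f}, intrinsic={r_int:.3f}, total={r_total:.3f}")
```

The key design point this sketch illustrates is that the bonus is non-stationary by construction: because the forward model is trained on the same transitions it scores, novel transitions earn a large bonus while repeated ones decay toward zero, which is what makes such signals useful in sparse-reward settings.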
Papers
Self-Supervised Exploration via Temporal Inconsistency in Reinforcement Learning
Zijian Gao, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, XinJun Mao, Huaimin Wang
Dynamic Memory-based Curiosity: A Bootstrap Approach for Exploration
Zijian Gao, YiYing Li, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, XinJun Mao, Huaimin Wang
EAGER: Asking and Answering Questions for Automatic Reward Shaping in Language-guided RL
Thomas Carta, Pierre-Yves Oudeyer, Olivier Sigaud, Sylvain Lamprier
MASER: Multi-Agent Reinforcement Learning with Subgoals Generated from Experience Replay Buffer
Jeewon Jeon, Woojun Kim, Whiyoung Jung, Youngchul Sung