Experience Replay
Experience replay (ER) stores past experiences and reuses them during training to improve learning efficiency and stability, most notably in reinforcement learning and continual learning. Current research focuses on optimizing ER strategies: prioritizing samples by metrics such as temporal-difference error, novelty, or importance; managing memory efficiently via techniques such as coreset compression and buffer management; and integrating ER with architectures such as graph neural networks and spiking neural networks to address catastrophic forgetting and sample inefficiency. These advances have significant implications for the performance and robustness of AI systems across applications ranging from robotics and autonomous systems to medical image analysis and drug discovery.
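To make the prioritized-sampling idea concrete, below is a minimal sketch of a proportional prioritized replay buffer in the style of Schaul et al.'s prioritized experience replay, where the probability of sampling a transition grows with its absolute TD error. The class name and the `alpha`, `beta`, and `eps` hyperparameters are illustrative assumptions for this sketch, not taken from any of the papers listed here.

```python
# Minimal sketch of proportional prioritized experience replay.
# Assumption: priorities are |TD error| + eps, sampling probability
# is priority^alpha, and importance-sampling weights use exponent beta.
import numpy as np


class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity      # max transitions kept in memory
        self.alpha = alpha            # how strongly priorities skew sampling
        self.eps = eps                # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                  # next slot to write (circular)

    def add(self, transition, td_error=None):
        # New transitions get the current max priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        prio = max_prio if td_error is None else abs(td_error) + self.eps
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition  # overwrite oldest slot
        self.priorities[self.pos] = prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.buffer)] ** self.alpha
        probs = prios / prios.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling; beta is typically annealed toward 1.
        weights = (len(self.buffer) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Refresh priorities after the learner recomputes TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

A production implementation would usually replace the flat priority array with a sum-tree so that sampling and priority updates scale as O(log N) rather than O(N) in the buffer size.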
Papers
HiER: Highlight Experience Replay and Easy2Hard Curriculum Learning for Boosting Off-Policy Reinforcement Learning Agents
Dániel Horváth, Jesús Bujalance Martín, Ferenc Gábor Erdős, Zoltán Istenes, Fabien Moutarde
Class-Wise Buffer Management for Incremental Object Detection: An Effective Buffer Training Strategy
Junsu Kim, Sumin Hong, Chanwoo Kim, Jihyeon Kim, Yihalem Yimolal Tiruneh, Jeongwan On, Jihyun Song, Sunhwa Choi, Seungryul Baek