Continual Offline Reinforcement Learning
Continual offline reinforcement learning (CORL) trains agents on a sequence of offline datasets, enabling them to learn new tasks without forgetting previously acquired skills. Current research focuses on improving the stability and efficiency of CORL, exploring techniques such as generative replay, multi-head architectures (e.g., within Decision Transformers), and carefully designed replay buffers that prioritize informative experiences (e.g., via curiosity-driven selection). These advances address the twin challenges of catastrophic forgetting and distribution shift, yielding more robust and efficient continual learning in offline settings, with implications for applications that require lifelong learning from static data.
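To make the multi-head idea concrete, below is a minimal PyTorch sketch, not drawn from any specific paper: a shared transformer backbone stands in for a Decision Transformer encoder, and each task in the sequence gets its own action head, so training on a new offline dataset only updates that task's head and leaves earlier heads untouched. All class, argument, and dimension names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadDecisionTransformer(nn.Module):
    """Hypothetical sketch: shared backbone + one action head per task."""

    def __init__(self, state_dim: int, act_dim: int, num_tasks: int, hidden_dim: int = 128):
        super().__init__()
        # Shared sequence encoder standing in for a full Decision Transformer
        # backbone (return-to-go and action token embeddings omitted for brevity).
        self.embed = nn.Linear(state_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # One action head per task: when a new offline dataset arrives, only the
        # active task's head is optimized, limiting interference between tasks.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, act_dim) for _ in range(num_tasks)]
        )

    def forward(self, states: torch.Tensor, task_id: int) -> torch.Tensor:
        # states: (batch, seq_len, state_dim) -> actions: (batch, seq_len, act_dim)
        h = self.backbone(self.embed(states))
        return self.heads[task_id](h)

model = MultiHeadDecisionTransformer(state_dim=17, act_dim=6, num_tasks=3)
states = torch.randn(8, 20, 17)      # a batch of length-20 state trajectories
actions = model(states, task_id=0)   # decode actions with task 0's head
print(actions.shape)                 # torch.Size([8, 20, 6])
```

The design choice this illustrates: parameter sharing in the backbone lets later tasks reuse representations, while per-task heads isolate the output mapping, one common way multi-head architectures mitigate catastrophic forgetting.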