Offline Meta Reinforcement Learning
Offline meta-reinforcement learning (OMRL) aims to train agents that rapidly adapt to new tasks using only pre-collected data, avoiding costly online interaction. Current research focuses on improving the robustness and generalization of learned task representations, often employing contrastive learning, adversarial data augmentation, or information-theoretic frameworks to disentangle task characteristics from behavior-policy biases. These advances address limitations arising from low data diversity and distribution shift, enabling more reliable and efficient adaptation to unseen tasks. The result is safer, more sample-efficient deployment of reinforcement learning agents in real-world applications.
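The contrastive approach mentioned above can be sketched concretely: transition batches drawn from the same task are treated as positive pairs, while batches from other tasks serve as negatives, and an InfoNCE-style loss trains a task encoder to separate them. The linear encoder, toy data, and all names below are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(transitions, W):
    # Hypothetical linear task encoder: maps transition features
    # (e.g. concatenated s, a, r, s') to a unit-normalized embedding.
    z = transitions @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(anchors, positives, temperature=0.1):
    # InfoNCE: each anchor's positive is the same-task embedding at the
    # same batch index; embeddings of the other tasks act as negatives.
    logits = anchors @ positives.T / temperature
    # Row-wise log-softmax; minimize negative log-prob of the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(anchors))
    return -log_probs[idx, idx].mean()

# Toy offline dataset: 4 tasks, two disjoint transition batches per task.
dim, emb_dim, n_tasks = 8, 4, 4
W = rng.normal(size=(dim, emb_dim))
task_means = rng.normal(size=(n_tasks, dim))
batch_a = task_means + 0.05 * rng.normal(size=(n_tasks, dim))
batch_b = task_means + 0.05 * rng.normal(size=(n_tasks, dim))

loss = info_nce(encode(batch_a, W), encode(batch_b, W))
print(f"contrastive loss: {loss:.3f}")
```

Minimizing this loss (here computed once with a random encoder, not trained) pushes embeddings of batches from the same task together regardless of which behavior policy collected them, which is the disentanglement goal the methods above pursue.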