Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to train agents from pre-collected data, eliminating the need for costly and potentially risky online interactions with the environment. Current research focuses on addressing challenges such as distributional shift (the mismatch between the state-action distribution of the logged data and the distribution induced by the learned policy) and on improving generalization across diverse tasks, using model architectures such as transformers, convolutional networks, and diffusion models, along with algorithms like conservative Q-learning and decision transformers. These advances matter for deploying RL in real-world settings where online learning is impractical or unsafe, with impact in fields ranging from robotics and healthcare to personalized recommendations and autonomous systems.
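To make the "conservative Q-learning" idea mentioned above concrete, here is a minimal sketch of one common form of its conservative penalty for a discrete-action Q-network. The names (`q_net`, `target_net`, `cql_alpha`, the batch layout) are illustrative assumptions, not details taken from the papers listed below.

```python
# Minimal sketch of a CQL-style loss, assuming a discrete-action Q-network and
# a batch of logged transitions from an offline dataset. Illustrative only.
import torch
import torch.nn.functional as F

def cql_loss(q_net, target_net, batch, gamma=0.99, cql_alpha=1.0):
    states, actions, rewards, next_states, dones = batch  # tensors from the offline dataset

    # Standard TD (Bellman) error on the dataset transitions.
    q_values = q_net(states)                                    # shape (B, num_actions)
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q
    td_loss = F.mse_loss(q_taken, td_target)

    # Conservative penalty: push down Q-values over all actions (log-sum-exp)
    # while pushing up Q-values on actions actually present in the data, so the
    # learned policy is penalized for preferring out-of-distribution actions.
    conservative_penalty = (torch.logsumexp(q_values, dim=1) - q_taken).mean()

    return td_loss + cql_alpha * conservative_penalty
```

The penalty term is what counteracts distributional shift: Q-value overestimation on actions the dataset never covers is explicitly discouraged, so the policy stays close to the support of the logged data.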
Papers
Dual Generator Offline Reinforcement Learning
Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar
Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints
Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine
Behavior Prior Representation learning for Offline Reinforcement Learning
Hongyu Zang, Xin Li, Jie Yu, Chen Liu, Riashat Islam, Remi Tachet Des Combes, Romain Laroche
Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks
Kuan Fang, Patrick Yin, Ashvin Nair, Homer Walke, Gengchen Yan, Sergey Levine
Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories
Qinqing Zheng, Mikael Henaff, Brandon Amos, Aditya Grover