Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to train agents from pre-collected datasets, eliminating the need for costly and potentially risky online interaction with the environment. Current research focuses on challenges such as distributional shift (the mismatch between the state-action distribution in the offline dataset and the one induced by the learned policy) and on improving generalization across diverse tasks, employing model architectures such as transformers, convolutional networks, and diffusion models, together with algorithms like conservative Q-learning and decision transformers. These advances matter for deploying RL in real-world settings where online learning is impractical or unsafe, with applications ranging from robotics and healthcare to personalized recommendation and autonomous systems.
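As a rough illustration of the conservative Q-learning idea mentioned above, the sketch below adds a conservatism penalty to a standard TD loss so that Q-values on out-of-distribution actions are pushed down relative to Q-values on actions seen in the dataset. It is a minimal sketch assuming a discrete action space and PyTorch; the network sizes, the batch layout, and the penalty weight `alpha` are illustrative assumptions, not the implementation from any of the papers listed here.

```python
# Minimal sketch of a conservative Q-learning style loss for offline RL.
# All names, shapes, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # Q-values for every discrete action, shape (batch, num_actions).
        return self.net(states)


def conservative_q_loss(q_net, target_q_net, batch, gamma=0.99, alpha=1.0):
    """TD error on dataset transitions plus a conservative penalty:
    logsumexp over all actions minus the Q-value of the dataset action."""
    states, actions, rewards, next_states, dones = batch

    q_all = q_net(states)                                     # (B, A)
    q_data = q_all.gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s, a_data)

    with torch.no_grad():
        next_q = target_q_net(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q

    td_loss = nn.functional.mse_loss(q_data, td_target)

    # Conservative term: penalize high Q-values on actions outside the data.
    conservative = (torch.logsumexp(q_all, dim=1) - q_data).mean()
    return td_loss + alpha * conservative


if __name__ == "__main__":
    # Random tensors standing in for one batch of an offline dataset.
    state_dim, num_actions, batch_size = 8, 4, 32
    q_net = QNetwork(state_dim, num_actions)
    target_q_net = QNetwork(state_dim, num_actions)
    target_q_net.load_state_dict(q_net.state_dict())

    batch = (
        torch.randn(batch_size, state_dim),
        torch.randint(0, num_actions, (batch_size,)),
        torch.randn(batch_size),
        torch.randn(batch_size, state_dim),
        torch.zeros(batch_size),
    )
    loss = conservative_q_loss(q_net, target_q_net, batch)
    loss.backward()
    print(f"conservative Q-learning loss: {loss.item():.4f}")
```

The weight on the conservative term trades off pessimism against fitting the dataset returns: larger values keep the learned policy closer to the behavior data, smaller values behave more like ordinary Q-learning.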
Papers
CDSA: Conservative Denoising Score-based Algorithm for Offline Reinforcement Learning
Zeyuan Liu, Kai Yang, Xiu Li
Augmenting Offline RL with Unlabeled Data
Zhao Wang, Briti Gangopadhyay, Jia-Fong Yeh, Shingo Takamatsu
Integrating Domain Knowledge for handling Limited Data in Offline RL
Briti Gangopadhyay, Zhao Wang, Jia-Fong Yeh, Shingo Takamatsu
Bayesian Design Principles for Offline-to-Online Reinforcement Learning
Hao Hu, Yiqin Yang, Jianing Ye, Chengjie Wu, Ziqing Mai, Yujing Hu, Tangjie Lv, Changjie Fan, Qianchuan Zhao, Chongjie Zhang
In-Context Decision Transformer: Reinforcement Learning via Hierarchical Chain-of-Thought
Sili Huang, Jifeng Hu, Hechang Chen, Lichao Sun, Bo Yang
Diffusion Actor-Critic: Formulating Constrained Policy Iteration as Diffusion Noise Regression for Offline Reinforcement Learning
Linjiajie Fang, Ruoxue Liu, Jing Zhang, Wenjia Wang, Bing-Yi Jing
Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning
Tenglong Liu, Yang Li, Yixing Lan, Hao Gao, Wei Pan, Xin Xu
Learning from Random Demonstrations: Offline Reinforcement Learning with Importance-Sampled Diffusion Models
Zeyu Fang, Tian Lan
Diffusion Policies creating a Trust Region for Offline Reinforcement Learning
Tianyu Chen, Zhendong Wang, Mingyuan Zhou