Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to train agents from pre-collected data, eliminating the need for costly and potentially risky online interaction with the environment. Current research focuses on challenges such as distributional shift (the mismatch between the state-action distribution of the logged dataset and that induced by the learned policy) and on improving generalization across diverse tasks. To this end, it employs model architectures such as transformers, convolutional networks, and diffusion models, along with algorithms like conservative Q-learning and decision transformers. These advances matter for deploying RL in real-world settings where online learning is impractical or unsafe, with applications ranging from robotics and healthcare to personalized recommendations and autonomous systems.
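The conservative Q-learning idea mentioned above can be illustrated with a minimal sketch: the regularizer pushes down a soft maximum (logsumexp) of Q-values over all actions while pushing up the Q-values of actions actually present in the dataset, discouraging overestimation on out-of-distribution actions. The function name, shapes, and toy numbers below are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def cql_penalty(q_values, dataset_actions):
    """Illustrative conservative Q-learning regularizer (tabular sketch).

    q_values: (batch, n_actions) array of Q(s, a) estimates.
    dataset_actions: (batch,) indices of actions taken in the offline data.
    Returns mean of  logsumexp_a Q(s, a) - Q(s, a_data).
    """
    # Numerically stable logsumexp over the action dimension: a soft maximum
    # that penalizes inflated Q-values on actions unseen in the dataset.
    m = q_values.max(axis=1, keepdims=True)
    logsumexp = m.squeeze(1) + np.log(np.exp(q_values - m).sum(axis=1))
    # Q-values of the actions the behavior policy actually took.
    q_data = q_values[np.arange(len(dataset_actions)), dataset_actions]
    return float((logsumexp - q_data).mean())

# Toy check (made-up numbers): the penalty is small when dataset actions
# already have the highest Q-values, and large when they are suboptimal.
q = np.array([[1.0, 5.0, 0.5],
              [2.0, 0.0, 3.0]])
print(cql_penalty(q, np.array([1, 2])))  # dataset actions are the greedy ones
print(cql_penalty(q, np.array([0, 1])))  # dataset actions are suboptimal
```

In practice this penalty is added to a standard temporal-difference loss with a trade-off coefficient; the sketch only shows why acting greedily on unconstrained Q-estimates is discouraged for out-of-distribution actions.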
Papers
Optimistic Critic Reconstruction and Constrained Fine-Tuning for General Offline-to-Online RL
Qin-Wen Luo, Ming-Kun Xie, Ye-Wen Wang, Sheng-Jun Huang
Robustness Evaluation of Offline Reinforcement Learning for Robot Control Against Action Perturbations
Shingo Ayabe, Takuto Otomo, Hiroshi Kera, Kazuhiko Kawamoto
Preserving Expert-Level Privacy in Offline Reinforcement Learning
Navodita Sharma, Vishnu Vinod, Abhradeep Thakurta, Alekh Agarwal, Borja Balle, Christoph Dann, Aravindan Raghuveer
Enhancing Decision Transformer with Diffusion-Based Trajectory Branch Generation
Zhihong Liu, Long Qian, Zeyang Liu, Lipeng Wan, Xingyu Chen, Xuguang Lan