Paper ID: 2502.18955 • Published Feb 26, 2025
Fewer May Be Better: Enhancing Offline Reinforcement Learning with Reduced Dataset
Yiqin Yang, Quanwei Wang, Chenghao Li, Hao Hu, Chengjie Wu, Yuhua Jiang, Dianyu Zhong, Ziyou Zhang, Qianchuan Zhao...
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences...
Offline reinforcement learning (RL) represents a significant shift in RL
research, allowing agents to learn from pre-collected datasets without further
interaction with the environment. A key, yet underexplored, challenge in
offline RL is selecting an optimal subset of the offline dataset that enhances
both algorithm performance and training efficiency. Reducing dataset size can
also reveal the minimal data requirements necessary for solving similar
problems. In response to this challenge, we introduce ReDOR (Reduced Datasets
for Offline RL), a method that frames dataset selection as a gradient
approximation optimization problem. We demonstrate that the widely used
actor-critic framework in RL can be reformulated as a submodular optimization
objective, enabling efficient subset selection. To achieve this, we adapt
orthogonal matching pursuit (OMP), incorporating several novel modifications
tailored for offline RL. Our experimental results show that the data subsets
identified by ReDOR not only boost algorithm performance but also do so with
significantly lower computational complexity.
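To make the gradient-approximation idea concrete, below is a minimal, illustrative sketch (not the authors' ReDOR implementation) of subset selection via orthogonal matching pursuit: greedily pick samples whose weighted per-sample gradients best approximate the full-dataset gradient. The per-sample gradient vectors are assumed to be precomputed, and names such as `omp_subset_selection`, `grads`, and `budget` are hypothetical, chosen only for this example.

```python
# Hypothetical sketch of gradient-matching subset selection with OMP.
# Assumes per-sample gradient vectors are already available as a matrix.
import numpy as np


def omp_subset_selection(grads: np.ndarray, budget: int):
    """Greedily select `budget` samples whose weighted gradient sum
    approximates the full-dataset gradient (residual matching)."""
    n, d = grads.shape
    target = grads.sum(axis=0)          # full-dataset gradient to approximate
    residual = target.copy()
    selected = []
    weights = np.zeros(0)

    for _ in range(budget):
        # Choose the sample whose gradient best explains the current residual.
        scores = grads @ residual
        scores[selected] = -np.inf      # avoid reselecting the same sample
        selected.append(int(np.argmax(scores)))

        # Re-fit weights on the chosen set by least squares, then update residual.
        A = grads[selected].T           # shape: (d, |S|)
        weights, *_ = np.linalg.lstsq(A, target, rcond=None)
        residual = target - A @ weights

    return selected, weights


# Toy usage: 1000 "transitions" with 32-dimensional gradient features, keep 50.
rng = np.random.default_rng(0)
grads = rng.normal(size=(1000, 32))
subset, w = omp_subset_selection(grads, budget=50)
print(len(subset), np.linalg.norm(grads.sum(axis=0) - grads[subset].T @ w))
```

This sketch only illustrates the generic OMP-style selection step; the paper's actual method additionally reformulates the actor-critic objective as a submodular target and modifies OMP for the offline RL setting.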