Paper ID: 2402.18836
A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations
Erhan Can Ozcan, Vittorio Giammarino, James Queeney, Ioannis Ch. Paschalidis
This paper investigates how to incorporate expert observations (i.e., state-only demonstrations without the corresponding expert actions) into a deep reinforcement learning setting to improve sample efficiency. First, we formulate an augmented policy loss that combines a maximum entropy reinforcement learning objective with a behavioral cloning loss leveraging a forward dynamics model. Then, we propose an algorithm that automatically adjusts the weights of each component in the augmented loss function. Experiments on a variety of continuous control tasks demonstrate that the proposed algorithm outperforms various benchmarks by effectively utilizing the available expert observations.
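As a rough illustration of the augmented objective described in the abstract, the sketch below combines a (placeholder) maximum entropy RL term with a behavioral-cloning-style term computed through a forward dynamics model: the policy's action at an expert state is pushed so that the predicted next state matches the expert's observed next state. The linear `policy` and `dynamics` functions, the fixed weights `lam_rl`/`lam_bc`, and the constant RL term are all hypothetical stand-ins for illustration, not the paper's actual networks or weight-adaptation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

# Hypothetical linear stand-ins for the learned networks (illustration only).
W_pi = rng.normal(size=(action_dim, state_dim))             # policy: a = W_pi @ s
W_f = rng.normal(size=(state_dim, state_dim + action_dim))  # dynamics: s' ~= W_f @ [s; a]

def policy(s):
    return W_pi @ s

def dynamics(s, a):
    return W_f @ np.concatenate([s, a])

# Expert data: observed (s, s') pairs with no actions recorded.
s_e = rng.normal(size=state_dim)
s_e_next = rng.normal(size=state_dim)

# Behavioral-cloning-style loss routed through the forward model:
# penalize the gap between the next state predicted under the
# policy's action and the expert's observed next state.
bc_loss = np.sum((dynamics(s_e, policy(s_e)) - s_e_next) ** 2)

# Placeholder for the maximum entropy RL loss (e.g., a soft actor-critic
# style objective); held constant here purely for illustration.
rl_loss = 1.0

# Augmented objective: a weighted sum of the two components. The paper
# adjusts these weights automatically; they are fixed in this sketch.
lam_rl, lam_bc = 1.0, 0.5
augmented_loss = lam_rl * rl_loss + lam_bc * bc_loss
print(augmented_loss)
```

In the actual method both components would be differentiated through the policy parameters, with the weights tuned automatically rather than set by hand.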
Submitted: Feb 29, 2024