Paper ID: 2011.02073

Optimal Control-Based Baseline for Guided Exploration in Policy Gradient Methods

Xubo Lyu, Site Li, Seth Siriya, Ye Pu, Mo Chen

In this paper, a novel optimal control-based baseline function is presented for policy gradient methods in deep reinforcement learning (RL). The baseline is obtained by computing the value function of an optimal control problem that is formulated to closely correspond to the RL task. In contrast to the traditional baseline, which aims to reduce the variance of policy gradient estimates, our work uses the optimal control value function to introduce a new role for the baseline: providing guided exploration during policy learning. This aspect has received little attention in prior work. We validate our baseline on robot learning tasks, showing its effectiveness in guided exploration, particularly in sparse reward environments.
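To make the idea concrete, below is a minimal, hedged sketch (not the authors' implementation) of a REINFORCE-style update in which the baseline subtracted from the return is a value function from an associated optimal control problem rather than a learned state-value estimate. The function `optimal_control_value` and the callback `policy_grad_log_prob` are hypothetical placeholders introduced purely for illustration.

```python
import numpy as np

def optimal_control_value(state):
    # Hypothetical stand-in for the value function of an optimal control
    # problem associated with the RL task (e.g. from dynamic programming or
    # an LQR/HJB solution). Here a simple quadratic placeholder is used.
    return -float(np.sum(np.asarray(state) ** 2))

def reinforce_gradient(trajectory, policy_grad_log_prob, gamma=0.99):
    """Policy gradient estimate with an optimal-control value as the baseline.

    trajectory: list of (state, action, reward) tuples from one rollout.
    policy_grad_log_prob(state, action): gradient of log pi(a|s) w.r.t.
        the policy parameters (supplied by the user's policy model).
    """
    rewards = [r for (_, _, r) in trajectory]
    grads = []
    for t, (s, a, _) in enumerate(trajectory):
        # Discounted return from time t onward.
        G_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, len(rewards)))
        # Subtracting the optimal-control value acts as a baseline; because it
        # encodes prior knowledge of the task, it can also bias the update
        # toward promising regions, i.e. guide exploration under sparse rewards.
        advantage = G_t - optimal_control_value(s)
        grads.append(advantage * np.asarray(policy_grad_log_prob(s, a)))
    return np.mean(grads, axis=0)
```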

Submitted: Nov 4, 2020