Paper ID: 2302.09676
Leveraging Prior Knowledge in Reinforcement Learning via Double-Sided Bounds on the Value Function
Jacob Adamczyk, Stas Tiomkin, Rahul Kulkarni
An agent's ability to leverage past experience is critical for efficiently solving new tasks. Approximate solutions for new tasks can be obtained from previously derived value functions, as demonstrated by research on transfer learning, curriculum learning, and compositionality. However, prior work has primarily focused on using value functions to obtain zero-shot approximations for solutions to a new task. In this work, we show how an arbitrary approximation of the value function can be used to derive double-sided bounds on the optimal value function of interest. We further extend the framework with error analysis for continuous state and action spaces. The derived results lead to new approaches for clipping during training, which we validate numerically in simple domains.
Submitted: Feb 19, 2023
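
The abstract describes using double-sided bounds on the optimal value function to clip value estimates during training. The sketch below is a minimal, hypothetical illustration of that idea: a one-step TD target clipped into an assumed interval [lower_bound, upper_bound]. The function name, signature, and numbers are illustrative only and are not taken from the paper.

```python
import numpy as np

def clipped_td_target(reward, next_q_max, lower_bound, upper_bound, gamma=0.99):
    """One-step TD target clipped into [lower_bound, upper_bound].

    `lower_bound` and `upper_bound` stand in for double-sided bounds on the
    optimal value function derived from a prior approximation; their exact
    construction follows the paper and is not reproduced here.
    """
    target = reward + gamma * next_q_max
    return np.clip(target, lower_bound, upper_bound)

# Toy usage with made-up values for a single transition.
print(clipped_td_target(reward=1.0, next_q_max=5.2, lower_bound=2.0, upper_bound=5.5))
```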