Paper ID: 2501.02652 • Published Jan 5, 2025
A View of the Certainty-Equivalence Method for PAC RL as an Application of the Trajectory Tree Method
Shivaram Kalyanakrishnan, Sheel Shah, Santhosh Kumar Guguloth
Reinforcement learning (RL) enables an agent interacting with an unknown MDP
M to optimise its behaviour by observing transitions sampled from M. A
natural entity that emerges in the agent's reasoning is \widehat{M}, the
maximum likelihood estimate of M based on the observed transitions. The
well-known \textit{certainty-equivalence} method (CEM) dictates that the agent
update its behaviour to \widehat{\pi}, which is an optimal policy for
\widehat{M}. Not only is CEM intuitive, but it has also been shown to enjoy
minimax-optimal sample complexity in some regions of the parameter space for
PAC RL with a generative model~\citep{Agarwal2020GenModel}.
A seemingly unrelated algorithm is the ``trajectory tree method''
(TTM)~\citep{Kearns+MN:1999}, originally developed for efficient decision-time
planning in large POMDPs. This paper presents a theoretical investigation that
stems from the surprising finding that CEM may indeed be viewed as an
application of TTM. The qualitative benefits of this view are (1) new and
simple proofs of sample-complexity upper bounds for CEM, which in fact hold
under (2) a weaker assumption on the rewards than is prevalent in the current
literature.
Our analysis applies to both non-stationary and stationary MDPs.
Quantitatively, we obtain (3) improvements in the sample-complexity upper
bounds for CEM both for non-stationary and stationary MDPs, in the regime that
the ``mistake probability'' δ is small. Additionally, we show (4) a
lower bound on the sample complexity for finite-horizon MDPs, which establishes
the minimax-optimality of our upper bound for non-stationary MDPs in the
small-δ regime.
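For concreteness, below is a minimal Python sketch of CEM in the generative-model setting the abstract describes: transitions are sampled for every (state, action) pair, the maximum-likelihood estimate \widehat{M} is built from the empirical counts, and \widehat{\pi} is obtained by backward (finite-horizon) value iteration on \widehat{M}. The interface `sample_next(s, a)`, the per-pair sample count `n`, and the horizon `H` are illustrative assumptions, not details taken from the paper, and the sketch does not reproduce the paper's analysis or its reward assumptions.

```python
# Minimal sketch of the certainty-equivalence method (CEM) for a finite-horizon MDP.
# Assumed (hypothetical) interface: finite state/action sets indexed 0..S-1 and 0..A-1,
# and a generative model sample_next(s, a) that returns (next_state, reward).
import numpy as np

def estimate_mdp(sample_next, S, A, n):
    """Build the maximum-likelihood estimate M_hat from n samples per (s, a) pair."""
    P_hat = np.zeros((S, A, S))   # empirical transition probabilities
    R_hat = np.zeros((S, A))      # empirical mean rewards
    for s in range(S):
        for a in range(A):
            for _ in range(n):
                s_next, r = sample_next(s, a)
                P_hat[s, a, s_next] += 1.0
                R_hat[s, a] += r
            P_hat[s, a] /= n
            R_hat[s, a] /= n
    return P_hat, R_hat

def certainty_equivalence_policy(P_hat, R_hat, H):
    """Backward value iteration on M_hat; returns a non-stationary policy pi_hat."""
    S, A, _ = P_hat.shape
    V = np.zeros(S)                       # value-to-go at the horizon
    pi_hat = np.zeros((H, S), dtype=int)  # pi_hat[t, s] = greedy action at step t
    for t in reversed(range(H)):
        # Q[s, a] = R_hat[s, a] + sum_s' P_hat[s, a, s'] * V[s']
        Q = R_hat + P_hat @ V
        pi_hat[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi_hat
```

A caller would first invoke `estimate_mdp` with its own sampler and then pass the resulting `P_hat`, `R_hat` to `certainty_equivalence_policy`; the returned `pi_hat` plays the role of \widehat{\pi}, the optimal policy of the estimated MDP \widehat{M}.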