Paper ID: 2409.14557 • Published Sep 22, 2024
Exploiting Exogenous Structure for Sample-Efficient Reinforcement Learning
Jia Wan, Sean R. Sinclair, Devavrat Shah, Martin J. Wainwright
We study Exo-MDPs, a structured class of Markov Decision Processes (MDPs)
where the state space is partitioned into exogenous and endogenous components.
Exogenous states evolve stochastically, independent of the agent's actions,
while endogenous states evolve deterministically based on both state components
and actions. Exo-MDPs are useful for applications including inventory control,
portfolio management, and ride-sharing. Our first result is structural,
establishing a representational equivalence between the classes of discrete
MDPs, Exo-MDPs, and discrete linear mixture MDPs. Specifically, any discrete
MDP can be represented as an Exo-MDP, and the transition and reward dynamics
can be written as linear functions of the exogenous state distribution, showing
that Exo-MDPs are instances of linear mixture MDPs. For unobserved exogenous
states, we prove a regret upper bound of O(H^{3/2} d \sqrt{K}) over K
trajectories of horizon H, with d as the size of the exogenous state space,
and establish nearly-matching lower bounds. Our findings demonstrate how
Exo-MDPs decouple sample complexity from the sizes of the action and endogenous
state spaces, and we validate our theoretical insights with experiments on
inventory control.
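As a concrete illustration of the Exo-MDP structure, the following sketch (hypothetical names and parameters, not the authors' code) shows a simple inventory-control transition: the exogenous demand is drawn from a fixed distribution independently of the action, while the endogenous inventory level is updated deterministically from the current state, the action, and the realized demand.

import numpy as np

# Minimal Exo-MDP sketch for inventory control (illustrative only).
# Exogenous state: demand x ~ q, drawn independently of the agent's action.
# Endogenous state: inventory level s, updated deterministically from (s, a, x).

rng = np.random.default_rng(0)

MAX_INV = 10                       # inventory capacity (hypothetical parameter)
demand_support = np.arange(5)      # exogenous state space, size d = 5
q = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # exogenous distribution, unknown to the agent

def step(inventory, order):
    """One Exo-MDP transition: stochastic exogenous draw + deterministic endogenous update."""
    demand = rng.choice(demand_support, p=q)          # exogenous: independent of the action
    next_inventory = min(inventory + order, MAX_INV)  # deterministic restocking
    sales = min(next_inventory, demand)
    next_inventory -= sales                           # deterministic endogenous update f(s, a, x)
    reward = sales - 0.1 * next_inventory             # revenue minus holding cost
    return next_inventory, reward

inventory = 5
for _ in range(3):
    inventory, reward = step(inventory, order=2)
    print(inventory, reward)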
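The linear-mixture connection can be sketched in the following notation (ours, not necessarily the paper's exact statement): let q denote the exogenous state distribution and f the deterministic endogenous update. Then

P(s' \mid s, a) = \sum_{x \in \mathcal{X}} \mathbf{1}\{f(s, a, x) = s'\}\, q(x)
               = \langle \phi(s' \mid s, a),\, q \rangle,
\qquad \phi_x(s' \mid s, a) := \mathbf{1}\{f(s, a, x) = s'\},

so the transition kernel is linear in the d-dimensional vector q, which is the defining structure of a linear mixture MDP with unknown parameter q. In the inventory sketch above, the demand distribution q plays the role of this unknown mixture parameter.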