Paper ID: 2306.04836
$K$-Nearest-Neighbor Resampling for Off-Policy Evaluation in Stochastic Control
Michael Giegrich, Roel Oomen, Christoph Reisinger
In this paper, we propose a novel $K$-nearest neighbor resampling procedure for estimating the performance of a policy from historical data containing realized episodes of a decision process generated under a different policy. We provide statistical consistency results under weak conditions. In particular, we avoid the common assumption of independently and identically distributed transitions and rewards. Instead, our analysis allows for the sampling of entire episodes, as is common practice in most applications. To establish consistency in this setting, we generalize Stone's Theorem, a well-known result in nonparametric statistics on local averaging, to include episodic data and the counterfactual estimation underlying off-policy evaluation (OPE). By focusing on feedback policies that depend deterministically on the current state, in environments with continuous state-action spaces and system-inherent stochasticity effected by the chosen actions, and by relying on trajectory simulation similar to Monte Carlo methods, the proposed method is particularly well suited to stochastic control environments. Compared to other OPE methods, our algorithm does not require optimization, can be efficiently implemented via tree-based nearest neighbor search and parallelization, and does not explicitly assume a parametric model for the environment's dynamics. Numerical experiments demonstrate the effectiveness of the algorithm compared to existing baselines in a variety of stochastic control settings, including a linear quadratic regulator, trade execution in limit order books, and online stochastic bin packing.
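To illustrate the resampling idea described above, the following is a minimal, hypothetical sketch: simulated trajectories under the target policy are built by drawing observed transitions from the $K$ nearest historical state-action pairs at each step, with nearest neighbors found via a tree-based search. The function name `knn_resampling_ope`, the flattened transition arrays, and the hyperparameters are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of K-nearest-neighbor resampling for off-policy evaluation.
# Assumes historical episodes are flattened into arrays of transitions
# (state, action, reward, next_state); not the authors' exact algorithm.
import numpy as np
from scipy.spatial import cKDTree

def knn_resampling_ope(states, actions, rewards, next_states,
                       target_policy, initial_states, horizon, k=10, rng=None):
    """Estimate the value of a deterministic feedback policy by simulating
    trajectories that resample observed transitions from the K nearest
    historical (state, action) pairs at each step."""
    rng = np.random.default_rng() if rng is None else rng
    # Tree-based nearest-neighbor search over historical (state, action) pairs.
    sa_data = np.hstack([states, actions])
    tree = cKDTree(sa_data)

    returns = []
    for s0 in initial_states:
        s, total = np.asarray(s0, dtype=float), 0.0
        for _ in range(horizon):
            a = target_policy(s)                    # deterministic feedback policy
            query = np.concatenate([s, np.atleast_1d(a)])
            _, idx = tree.query(query, k=k)         # K nearest historical transitions
            j = rng.choice(np.atleast_1d(idx))      # resample one neighbor uniformly
            total += rewards[j]                     # use its observed reward ...
            s = next_states[j]                      # ... and next state to continue
        returns.append(total)
    return float(np.mean(returns))                  # Monte Carlo average over start states
```

In this sketch, each simulated step costs one tree query (logarithmic in the number of stored transitions), no optimization or explicit dynamics model is needed, and the outer loop over initial states can be parallelized, consistent with the implementation properties highlighted in the abstract.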
Submitted: Jun 7, 2023