Paper ID: 2405.20467
Performance of NPG in Countable State-Space Average-Cost RL
Yashaswini Murthy, Isaac Grosof, Siva Theja Maguluri, R. Srikant
We consider policy optimization methods in reinforcement learning settings where the state space is arbitrarily large or even countably infinite. The motivation arises from control problems in communication networks, matching markets, and other queueing systems. Specifically, we consider the popular Natural Policy Gradient (NPG) algorithm, which has previously been studied only under the assumptions that the cost is bounded and the state space is finite, neither of which holds for the aforementioned control problems. Assuming a Lyapunov drift condition, which is naturally satisfied in some cases and can be satisfied in others at a small cost in performance, we design a state-dependent step-size rule that dramatically improves the performance of NPG for our intended applications. In addition to experimentally verifying the performance improvement, we theoretically show that the iteration complexity of NPG can be made independent of the size of the state space. The key analytical tool we use is the connection between NPG step-sizes and the solution to Poisson's equation. In particular, we provide policy-independent bounds on the solution to Poisson's equation, which are then used to guide the choice of NPG step-sizes.
Submitted: May 30, 2024
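
Below is a minimal illustrative sketch (not the authors' implementation) of the idea the abstract describes. In the tabular softmax setting, the NPG update for cost minimization takes a multiplicative-weights form, pi'(a|s) proportional to pi(a|s) * exp(-eta(s) * Q(s, a)), and the paper's contribution is to choose the state-dependent step size eta(s) guided by policy-independent bounds on the solution to Poisson's equation. The schedule eta(s) = 0.5 / V(s) used here, with V a placeholder Lyapunov function (e.g., one plus the queue length), is a hypothetical choice for illustration, not the paper's actual rule.

```python
import numpy as np

def npg_step(pi, Q, eta):
    """One tabular NPG update in multiplicative-weights form.

    For softmax-parameterized policies minimizing cost, the NPG update
    reduces to pi'(a|s) proportional to pi(a|s) * exp(-eta[s] * Q[s, a]),
    where eta[s] is a per-state step size.

    pi  : (S, A) array, current policy; each row sums to 1.
    Q   : (S, A) array, relative action-value estimates under pi.
    eta : (S,)   array, state-dependent step sizes.
    """
    logits = np.log(pi) - eta[:, None] * Q
    logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

# Toy usage on a 50-state, 2-action problem. The schedule eta[s] = 0.5 / V[s]
# is a hypothetical choice for illustration only: it shrinks the step size in
# states where the (assumed) Lyapunov-based bound on the solution to
# Poisson's equation is large.
S, A = 50, 2
rng = np.random.default_rng(0)
pi = np.full((S, A), 1.0 / A)        # start from the uniform policy
Q = rng.standard_normal((S, A))      # stand-in for estimated Q-values
V = 1.0 + np.arange(S, dtype=float)  # placeholder Lyapunov values
eta = 0.5 / V                        # illustrative schedule, not the paper's rule
pi = npg_step(pi, Q, eta)
```

The point of the per-state schedule, as the abstract suggests, is that a single worst-case step size would have to shrink with the size of the state space, whereas tying eta(s) to a bound that grows with the Lyapunov function lets the iteration complexity remain state-space independent.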