Paper ID: 2405.13130
Pure Planning to Pure Policies and In Between with a Recursive Tree Planner
A. Norman Redlich
A recursive tree planner (RTP) is designed to operate as a pure planner without policies at one extreme and as a pure greedy policy at the other. In between, the RTP exploits policies to improve planning performance and zero-shot transfer from one class of planning problem to another. Policies are learned by imitating the planner; the planner then exploits those policies to produce better plans, which improve the policies further in a virtuous cycle. To improve planning performance and zero-shot transfer, the RTP incorporates previously learned tasks as generalized actions (GA) at any level of its hierarchy, and can refine those GA by adding primitive actions at any level as well. For search, the RTP uses a generalized Dijkstra algorithm [Dijkstra 1959] that tries the greedy policy first, then searches over near-greedy paths, and then moves farther away as necessary. The RTP can return multiple sub-goals from lower levels as well as boundary states near obstacles, and can exploit policies with background and object-number invariance. Policies at all levels of the hierarchy can be learned simultaneously, in any order, or come from outside the framework. The RTP is tested here on a variety of Box2D [Catto 2022] problems, including the classic lunar lander [Farama 2022], and on the MuJoCo [Todorov et al. 2012] inverted pendulum.
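To make the greedy-first search strategy concrete, the following is a minimal sketch, not taken from the paper, of a policy-guided Dijkstra search: the greedy policy's action incurs zero cost and every deviation adds a fixed penalty, so the pure greedy rollout is expanded first, then near-greedy paths, and then paths progressively farther from the policy. All names here (`policy_guided_dijkstra`, `actions`, `step`, `policy`, `deviation_cost`, `budget`) are hypothetical, and discrete actions with hashable states are assumed.

```python
import heapq
import itertools

def policy_guided_dijkstra(start, goal_test, actions, step, policy,
                           deviation_cost=1.0, budget=10.0):
    # Hypothetical sketch, not the paper's implementation.
    # Edge cost is 0 for the policy's greedy action and `deviation_cost`
    # otherwise, so Dijkstra pops the pure greedy rollout first and then
    # plans in order of how far they stray from the policy.
    counter = itertools.count()                # tie-breaker for the heap
    frontier = [(0.0, next(counter), start, [])]
    best = {start: 0.0}                        # cheapest known cost per state
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path                        # nearest-to-greedy plan found
        if cost > budget:
            break                              # stop beyond the search budget
        greedy = policy(state)
        for a in actions(state):
            nxt = step(state, a)               # deterministic transition assumed
            c = cost + (0.0 if a == greedy else deviation_cost)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(frontier, (c, next(counter), nxt, path + [a]))
    return None                                # no plan within the budget
```

With `deviation_cost` set high relative to `budget` this reduces to a pure greedy rollout, while `deviation_cost = 0` gives an unweighted exhaustive search, matching the pure-policy-to-pure-planning spectrum the abstract describes.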
Submitted: May 21, 2024