Paper ID: 2403.04205

OGMP: Oracle Guided Multi-mode Policies for Agile and Versatile Robot Control

Lokesh Krishna, Nikhil Sobanbabu, Quan Nguyen

The efficacy of reinforcement learning for robot control relies on the tailored integration of task-specific priors and heuristics for effective exploration, which hinders its straightforward application to complex tasks and motivates a unified approach. In this work, we define a general class of priors, called oracles, that generate state references when queried in a closed-loop manner during training. By bounding the permissible states around the oracle's ansatz, we propose task-agnostic oracle-guided policy optimization. To enhance modularity, we introduce task-vital modes, showing that a policy mastering a compact set of modes and the transitions between them can handle infinite-horizon tasks. For instance, to perform parkour on an infinitely long track, the policy must learn to jump, leap, pace, and transition between these modes effectively. We validate this approach on challenging bipedal control tasks: parkour and diving with HECTOR, a 16-DoF dynamic bipedal robot. Our method yields a single policy per task, solving parkour across diverse tracks and omnidirectional diving from heights of up to 2 m in simulation, showcasing versatile agility. We demonstrate successful sim-to-real transfer of parkour on the real robot, including leaping over gaps up to 105% of the leg length, jumping over blocks up to 20% of the robot's nominal height, and pacing at speeds of up to 0.6 m/s, along with effective transitions between these modes.
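The core mechanism described above — querying an oracle in closed loop for a state reference and restricting exploration to a bounded region around that reference — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the box-shaped bound, and the episode-termination choice are all assumptions for illustration.

```python
import numpy as np

def within_oracle_bound(state, oracle_ref, epsilon):
    """Return True if the state lies inside the permissible region
    around the oracle's reference (a box bound of half-width epsilon).
    The box-bound form is an illustrative assumption."""
    return bool(np.all(np.abs(np.asarray(state) - np.asarray(oracle_ref)) <= epsilon))

def guided_training_step(oracle, env_state, t, epsilon):
    """One step of oracle-guided training (hypothetical interface).

    The oracle is queried in closed loop with the current state and time
    to produce a state reference; if the state leaves the bounded tube
    around that reference, the episode is flagged for termination, which
    confines exploration near the oracle's ansatz."""
    ref = oracle(env_state, t)              # closed-loop oracle query
    alive = within_oracle_bound(env_state, ref, epsilon)
    return ref, alive
```

Usage: a simple constant-reference oracle `lambda s, t: np.zeros_like(s)` with `epsilon = 0.3` keeps episodes alive only while every state coordinate stays within 0.3 of the reference.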

Submitted: Mar 7, 2024