Paper ID: 2407.11658

Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion

Henri-Jacques Geiß, Firas Al-Hafez, Andre Seyfarth, Jan Peters, Davide Tateo

Learning a locomotion controller for a musculoskeletal system is challenging due to its over-actuation and high-dimensional action space. While many reinforcement learning methods attempt to address this issue, they often struggle to learn human-like gaits because of the complexity of engineering an effective reward function. In this paper, we demonstrate that adversarial imitation learning can address this issue: we analyze the key problems and provide solutions drawn from both the current literature and novel techniques. We validate our methodology by learning walking and running gaits on a simulated humanoid model with 16 degrees of freedom and 92 Muscle-Tendon Units, achieving natural-looking gaits with only a few demonstrations.
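
The central idea in the abstract is to replace a hand-engineered gait reward with an adversarial imitation signal learned from a few demonstrations. The sketch below shows a generic GAIL-style setup in PyTorch, assuming state-action demonstrations; the class names, network sizes, and reward form are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a GAIL-style adversarial imitation reward (illustrative,
# not the paper's exact method). A discriminator learns to separate expert
# (state, action) pairs from policy samples; the policy is then rewarded for
# fooling it, removing the need for a hand-engineered gait reward.
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),  # logit: expert vs. policy sample
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))


def discriminator_step(disc, optimizer, expert_batch, policy_batch):
    """One binary-classification update: expert pairs labeled 1, policy pairs 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(*expert_batch)
    policy_logits = disc(*policy_batch)
    loss = (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(policy_logits, torch.zeros_like(policy_logits)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def imitation_reward(disc, obs, act):
    """Reward the policy for resembling the demonstrations: -log(1 - D)."""
    with torch.no_grad():
        d = torch.sigmoid(disc(obs, act))
    return -torch.log(1.0 - d + 1e-8)
```

In such a setup, the imitation reward would be fed to any standard on-policy RL algorithm (e.g., PPO) acting on the muscle-activation space of the humanoid model.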

Submitted: Jul 16, 2024