Locomotion Skill
Locomotion skill research focuses on enabling robots to move naturally and robustly in diverse environments, primarily through learning-based approaches. Current efforts concentrate on controllers that handle multiple gaits, transition seamlessly between them, and adapt to unpredictable terrain, using techniques such as reinforcement learning, diffusion models, and adversarial training, often combined with keyframing or contact-conditioned policies. These advances improve robot mobility in real-world applications and also offer insight into the principles of biological locomotion, through analysis of learned representations and identification of the sensory feedback that matters most for stable gait.
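To make the idea of mixing dense and sparse reward terms concrete, below is a minimal, hypothetical sketch of a locomotion reward in Python. It is not the formulation used by any of the listed papers; all names, weights, and thresholds (locomotion_reward, w_track, sparse_bonus, keyframe_tol) are assumptions chosen for illustration. It combines a per-step velocity-tracking term and energy penalty (dense) with a bonus granted only when the robot base reaches a high-level keyframe target (sparse).

```python
import numpy as np

def locomotion_reward(base_vel, cmd_vel, joint_torques, base_pos,
                      keyframe_pos, keyframe_tol=0.1,
                      w_track=1.0, w_energy=1e-4, sparse_bonus=10.0):
    """Toy reward mixing dense per-step terms with a sparse keyframe bonus."""
    # Dense term: exponential velocity-tracking reward, a common shape in legged RL.
    vel_err = np.sum((base_vel - cmd_vel) ** 2)
    r_track = w_track * np.exp(-vel_err / 0.25)

    # Dense term: small energy penalty to discourage wasteful torques.
    r_energy = -w_energy * np.sum(joint_torques ** 2)

    # Sparse term: bonus only when the base is within tolerance of the keyframe.
    at_keyframe = np.linalg.norm(base_pos - keyframe_pos) < keyframe_tol
    r_sparse = sparse_bonus if at_keyframe else 0.0

    return r_track + r_energy + r_sparse


# Example call with toy values (12 joints, 2D base position).
r = locomotion_reward(
    base_vel=np.array([0.9, 0.0, 0.0]),
    cmd_vel=np.array([1.0, 0.0, 0.0]),
    joint_torques=np.zeros(12),
    base_pos=np.array([2.0, 0.0]),
    keyframe_pos=np.array([2.05, 0.0]),
)
print(f"reward = {r:.3f}")
```

In practice, the dense terms give the policy a learning signal at every step, while the sparse bonus encodes the high-level objective; how the two are balanced and attributed over time is exactly the design question the reward-mixture and keyframing papers above address.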
Papers
Safe Learning of Locomotion Skills from MPC
Xun Pua, Majid Khadiv
RobotKeyframing: Learning Locomotion with High-Level Objectives via Mixture of Dense and Sparse Rewards
Fatemeh Zargarbashi, Jin Cheng, Dongho Kang, Robert Sumner, Stelian Coros
Contact-conditioned learning of locomotion policies
Michal Ciebielski, Majid Khadiv
Efficient Learning of Locomotion Skills through the Discovery of Diverse Environmental Trajectory Generator Priors
Shikha Surana, Bryan Lim, Antoine Cully
Creating a Dynamic Quadrupedal Robotic Goalkeeper with Reinforcement Learning
Xiaoyu Huang, Zhongyu Li, Yanzhen Xiang, Yiming Ni, Yufeng Chi, Yunhao Li, Lizhi Yang, Xue Bin Peng, Koushil Sreenath