Human-Like Locomotion Behavior
Human-like locomotion behavior in robotics aims to endow robots with versatile, efficient movement across diverse terrains, mimicking the agility and adaptability of animals. Current research focuses on robust control algorithms, often based on reinforcement learning, and on model architectures ranging from neural networks (e.g., transformers and central pattern generator-inspired networks) to reduced-order models such as the linear inverted pendulum model (LIPM). These advances improve robot performance in challenging environments, with applications spanning search and rescue, exploration, and assistive technologies. The field is also exploring bio-inspired designs and control strategies to enhance efficiency and robustness.
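To make the reduced-order-model idea concrete, here is a minimal sketch of LIPM dynamics. The constants, function names, and initial state below are illustrative assumptions, not taken from any of the listed papers: the LIPM simply says the center of mass (CoM) accelerates away from the stance foot in proportion to their horizontal offset.

```python
# Minimal sketch of the Linear Inverted Pendulum Model (LIPM).
# All parameter values here are illustrative assumptions.

G = 9.81   # gravity [m/s^2]
Z0 = 0.9   # constant CoM height [m] (assumed)

def lipm_step(x, xdot, p, dt):
    """Advance the LIPM CoM state by one explicit Euler step.

    x    : CoM horizontal position [m]
    xdot : CoM horizontal velocity [m/s]
    p    : stance-foot (ZMP) position [m]
    dt   : timestep [s]
    """
    # LIPM dynamics: xddot = (g / z0) * (x - p)
    xddot = (G / Z0) * (x - p)
    return x + xdot * dt, xdot + xddot * dt

# Simulate 1 s with the CoM starting slightly ahead of the foot:
# it accelerates away, which is why walking controllers must keep
# stepping to "catch" the falling pendulum.
x, xdot = 0.05, 0.0
for _ in range(100):
    x, xdot = lipm_step(x, xdot, p=0.0, dt=0.01)
print(x, xdot)
```

Controllers built on this model typically choose the next foothold `p` so that the divergent component of motion stays bounded, which is the core idea behind capture-point and ZMP-based walking.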
Papers
Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion
Henri-Jacques Geiß, Firas Al-Hafez, Andre Seyfarth, Jan Peters, Davide Tateo
RobotKeyframing: Learning Locomotion with High-Level Objectives via Mixture of Dense and Sparse Rewards
Fatemeh Zargarbashi, Jin Cheng, Dongho Kang, Robert Sumner, Stelian Coros
Contact-conditioned learning of locomotion policies
Michal Ciebielski, Majid Khadiv