Robust Imitation
Robust imitation learning aims to train agents that can successfully mimic expert behavior even when faced with variations in the environment, noisy data, or distribution shifts between training and deployment. Current research focuses on developing algorithms that are resilient to these challenges, employing techniques like attention mechanisms, inverse dynamics modeling, and disturbance injection to improve generalization and robustness. This field is crucial for advancing robotics, autonomous systems, and game AI, enabling the efficient transfer of human skills to machines and the creation of more reliable and adaptable intelligent agents.
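Of the techniques mentioned above, disturbance injection is the simplest to illustrate: noise is added to the expert's actions during data collection so the learner observes states off the nominal trajectory, paired with the expert's corrective (noise-free) actions as labels. The sketch below is a generic, hypothetical illustration of this idea, not the method of any paper listed here; the function names, the state-feedback expert, and the known dynamics model are all assumptions for the example.

```python
import numpy as np

def collect_with_disturbance(expert_policy, dynamics, x0, horizon, noise_std, rng):
    """Roll out the expert while injecting Gaussian noise into its actions.

    The expert's intended (noise-free) actions are recorded as labels, so the
    dataset covers states perturbed away from the expert's nominal trajectory
    together with corrective supervision -- the core of disturbance injection.
    """
    states, actions = [], []
    x = np.asarray(x0, dtype=float)
    for _ in range(horizon):
        u = expert_policy(x)                  # intended action, used as the label
        states.append(x.copy())
        actions.append(np.asarray(u).copy())
        u_noisy = u + rng.normal(0.0, noise_std, size=np.shape(u))  # injected disturbance
        x = dynamics(x, u_noisy)              # environment executes the noisy action
    return np.array(states), np.array(actions)

# Toy usage (illustrative): a 1-D integrator with an expert that drives the
# state toward zero with proportional feedback.
rng = np.random.default_rng(0)
expert = lambda x: -0.5 * x
dyn = lambda x, u: x + u
S, A = collect_with_disturbance(expert, dyn, x0=[1.0], horizon=20,
                                noise_std=0.1, rng=rng)
```

A behavior-cloning learner trained on `(S, A)` pairs then sees a wider slice of the state space than a clean expert rollout would provide, which is what makes the resulting policy more robust to its own small mistakes at deployment time.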
Papers
One-Shot Robust Imitation Learning for Long-Horizon Visuomotor Tasks from Unsegmented Demonstrations
Shaokang Wu, Yijin Wang, Yanlong Huang
Robust Imitation Learning for Mobile Manipulator Focusing on Task-Related Viewpoints and Regions
Yutaro Ishida, Yuki Noguchi, Takayuki Kanai, Kazuhiro Shintani, Hiroshi Bito