Imitation Learning
Imitation learning trains agents to mimic expert behavior from demonstration data, with the primary goal of efficiently transferring complex skills from humans or other capable controllers to robots. Current research emphasizes improving data efficiency through techniques such as active learning and data augmentation, and leverages large language models to provide richer context and recover from failures. The field is central to robotics, autonomous driving, and other domains that require complex control policies, because it offers a more data-driven and less labor-intensive alternative to hand-programming behaviors.
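The simplest instantiation of this idea is behavioral cloning: treat expert demonstrations as a supervised dataset and regress the policy onto the expert's actions. The sketch below illustrates that baseline in PyTorch; the data, network sizes, and hyperparameters are illustrative placeholders and are not drawn from any of the papers listed here.

    # Minimal behavioral cloning sketch (illustrative only).
    # Assumes expert demonstrations are available as (state, action) pairs.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Hypothetical demonstration data: 1000 transitions with a
    # 10-dimensional state and a 4-dimensional continuous action.
    states = torch.randn(1000, 10)
    actions = torch.randn(1000, 4)
    loader = DataLoader(TensorDataset(states, actions),
                        batch_size=64, shuffle=True)

    # Policy network mapping states to actions.
    policy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Supervised regression onto the expert's actions.
    for epoch in range(10):
        for s, a in loader:
            optimizer.zero_grad()
            loss = loss_fn(policy(s), a)
            loss.backward()
            optimizer.step()

Behavioral cloning of this kind suffers from compounding errors when the learned policy drifts away from states seen in the demonstrations, which is one motivation for the data-efficiency, failure-recovery, and safety-aware methods studied in the papers below.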
Papers
Learning Strategy Representation for Imitation Learning in Multi-Agent Games
Shiqi Lei, Kanghon Lee, Linjing Li, Jinkyoo Park
RAIL: Reachability-Aided Imitation Learning for Safe Policy Execution
Wonsuhk Jung, Dennis Anthony, Utkarsh A. Mishra, Nadun Ranawaka Arachchige, Matthew Bronars, Danfei Xu, Shreyas Kousik
CANDERE-COACH: Reinforcement Learning from Noisy Feedback
Yuxuan Li, Srijita Das, Matthew E. Taylor
Zero-Cost Whole-Body Teleoperation for Mobile Manipulation
Daniel Honerkamp, Harsh Mahesheka, Jan Ole von Hartz, Tim Welschehold, Abhinav Valada
RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning
Yinpei Dai, Jayjun Lee, Nima Fazeli, Joyce Chai
Work Smarter Not Harder: Simple Imitation Learning with CS-PIBT Outperforms Large Scale Imitation Learning for MAPF
Rishi Veerapaneni, Arthur Jakobsson, Kevin Ren, Samuel Kim, Jiaoyang Li, Maxim Likhachev
Contact Compliance Visuo-Proprioceptive Policy for Contact-Rich Manipulation with Cost-Efficient Haptic Hand-Arm Teleoperation System
Bo Zhou, Ruixuan Jiao, Yi Li, Fang Fang, Fu Chen