Imitation Learning
Imitation learning trains agents to mimic expert behavior from demonstration data, with the primary goal of efficiently transferring complex skills from humans or other capable controllers to robots. Current research emphasizes improving data efficiency through active learning, data augmentation, and the use of large language models to provide richer context and handle failures. The field is central to robotics, autonomous driving, and other domains requiring complex control policies, since it offers a more data-driven and potentially less labor-intensive alternative to traditional hand-programming.
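As a concrete illustration of the core idea (not drawn from any specific paper below), here is a minimal behavior-cloning sketch in PyTorch: the policy is fit with plain supervised regression on expert state-action pairs. The "expert" (a fixed linear controller), the dimensions, and the hyperparameters are all hypothetical placeholders; in practice the demonstrations would come from human teleoperation or a scripted controller.

```python
# Minimal behavior cloning: imitation learning reduced to supervised
# regression on (state, action) pairs collected from an expert.
import torch
import torch.nn as nn

torch.manual_seed(0)

STATE_DIM, ACTION_DIM, N_DEMOS = 8, 2, 1024

# Hypothetical "expert": a fixed linear controller standing in for
# human or scripted demonstrations.
expert_weights = torch.randn(STATE_DIM, ACTION_DIM)
states = torch.randn(N_DEMOS, STATE_DIM)
expert_actions = states @ expert_weights

# Policy: a small MLP mapping states to continuous actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training loop: minimize the mismatch between the policy's actions
# and the expert's actions on the demonstrated states.
for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), expert_actions)
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```

Pure behavior cloning like this suffers from compounding errors once the agent drifts away from the demonstrated states, which is precisely the gap that the decision-time planning, active learning, and data-augmentation techniques surveyed above aim to close.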
Papers
Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning
Carl Qi, Pieter Abbeel, Aditya Grover
Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale
Ram Ramrakhya, Eric Undersander, Dhruv Batra, Abhishek Das
3D Perception based Imitation Learning under Limited Demonstration for Laparoscope Control in Robotic Surgery
Bin Li, Ruofeng Wei, Jiaqi Xu, Bo Lu, Chi-Hang Yee, Chi-Fai Ng, Pheng-Ann Heng, Qi Dou, Yun-Hui Liu
Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation
Haresh Karnan, Anirudh Nair, Xuesu Xiao, Garrett Warnell, Soeren Pirk, Alexander Toshev, Justin Hart, Joydeep Biswas, Peter Stone
Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division
Dafni Antotsiou, Carlo Ciliberto, Tae-Kyun Kim