Imitation Learning
Imitation learning trains agents to mimic expert behavior from demonstration data, with the goal of efficiently transferring complex skills from humans or other expert controllers to robots. Current research emphasizes data efficiency, using techniques such as active learning, data augmentation, and large language models that supply richer context and help recover from failures. The field is important for robotics, autonomous driving, and other domains that require complex control policies, since it offers a more data-driven and less labor-intensive alternative to hand-engineering controllers.
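The core recipe underlying much of this work is behavior cloning: treat expert demonstrations as a supervised dataset of observation-action pairs and regress a policy onto them. The following is a minimal sketch in PyTorch; the network architecture, dimensions, and synthetic "expert" data are illustrative assumptions, not the method of any paper listed below.

# Minimal behavior-cloning sketch. All dimensions, hyperparameters,
# and the synthetic expert data are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 10, 4  # hypothetical observation/action sizes

# Policy network: a small MLP mapping observations to continuous actions.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACT_DIM),
)

# Stand-in for recorded expert (observation, action) pairs.
expert_obs = torch.randn(1024, OBS_DIM)
expert_act = torch.randn(1024, ACT_DIM)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Supervised regression: minimize the error between the policy's
# predicted actions and the expert's actions.
for epoch in range(200):
    pred = policy(expert_obs)
    loss = loss_fn(pred, expert_act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Many of the papers below (e.g., corrective data augmentation in CCIL, conditional diffusion policies in C3DM) target the compounding-error problem that plain behavior cloning suffers once the learned policy drifts away from the states the expert visited.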
Papers
C3DM: Constrained-Context Conditional Diffusion Models for Imitation Learning
Vaibhav Saxena, Yotto Koga, Danfei Xu
Learning Realistic Traffic Agents in Closed-loop
Chris Zhang, James Tu, Lunjun Zhang, Kelvin Wong, Simon Suo, Raquel Urtasun
Multimodal and Force-Matched Imitation Learning with a See-Through Visuotactile Sensor
Trevor Ablett, Oliver Limoyo, Adam Sigal, Affan Jilani, Jonathan Kelly, Kaleem Siddiqi, Francois Hogan, Gregory Dudek
WebWISE: Web Interface Control and Sequential Exploration with Large Language Models
Heyi Tao, Sethuraman T, Michal Shlapentokh-Rothman, Derek Hoiem
Human-in-the-Loop Task and Motion Planning for Imitation Learning
Ajay Mandlekar, Caelan Garrett, Danfei Xu, Dieter Fox
Good Better Best: Self-Motivated Imitation Learning for Noisy Demonstrations
Ye Yuan, Xin Li, Yong Heng, Leiji Zhang, MingZhong Wang
LeTFuser: Light-weight End-to-end Transformer-Based Sensor Fusion for Autonomous Driving with Multi-Task Learning
Pedram Agand, Mohammad Mahdavian, Manolis Savva, Mo Chen
CCIL: Continuity-based Data Augmentation for Corrective Imitation Learning
Liyiming Ke, Yunchu Zhang, Abhay Deshpande, Siddhartha Srinivasa, Abhishek Gupta
PUMA: Deep Metric Imitation Learning for Stable Motion Primitives
Rodrigo Pérez-Dattari, Cosimo Della Santina, Jens Kober