Imitation Learning
Imitation learning trains agents to mimic expert behavior from demonstration data, with the goal of efficiently transferring complex skills from humans or other advanced controllers to robots. Current research emphasizes improving data efficiency through techniques like active learning, data augmentation, and leveraging large language models to provide richer context and handle failures. The field is crucial for advancing robotics, autonomous driving, and other areas requiring complex control policies, as it offers a more data-driven and potentially less labor-intensive approach than traditional hand-programmed controllers.
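The simplest instance of imitation learning is behavioral cloning: treat the expert's (state, action) pairs as a supervised dataset and regress a policy onto them. The sketch below is a minimal, self-contained illustration with an assumed synthetic linear expert and a linear policy fit by least squares; real systems use neural policies and raw demonstrations, but the learning objective is the same.

```python
import numpy as np

# Minimal behavioral-cloning sketch (synthetic setup for illustration):
# fit a linear policy pi(s) = W @ s to (state, action) pairs from an "expert".
rng = np.random.default_rng(0)

# Hypothetical expert: a fixed linear controller on 4-D states, 2-D actions.
W_expert = rng.normal(size=(2, 4))    # unknown to the learner
states = rng.normal(size=(500, 4))    # states visited in demonstrations
actions = states @ W_expert.T         # expert actions recorded at those states

# Behavioral cloning = supervised regression on the demonstrations:
# solve min_W || states @ W.T - actions ||^2 by least squares.
W_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_learned = W_learned.T

# With noiseless linear demonstrations, the cloned policy should
# match the expert on held-out states.
test_states = rng.normal(size=(100, 4))
err = np.max(np.abs(test_states @ W_learned.T - test_states @ W_expert.T))
print(f"max action error on held-out states: {err:.2e}")
```

A key caveat, which motivates much of the research listed below: behavioral cloning is only trained on states the expert visited, so small errors at deployment can drift the agent into unfamiliar states where its predictions degrade (covariate shift); techniques such as data augmentation and active querying of the expert address exactly this.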
Papers
Transferring Foundation Models for Generalizable Robotic Manipulation
Jiange Yang, Wenhui Tan, Chuhao Jin, Keling Yao, Bei Liu, Jianlong Fu, Ruihua Song, Gangshan Wu, Limin Wang
Embodied Executable Policy Learning with Language-based Scene Summarization
Jielin Qiu, Mengdi Xu, William Han, Seungwhan Moon, Ding Zhao
LIV: Language-Image Representations and Rewards for Robotic Control
Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, Dinesh Jayaraman
Causal Imitability Under Context-Specific Independence Relations
Fateme Jamshidi, Sina Akbari, Negar Kiyavash
Efficient Deep Learning of Robust Policies from MPC using Imitation and Tube-Guided Data Augmentation
Andrea Tagliabue, Jonathan P. How