Imitation Learning
Imitation learning trains agents to mimic expert behavior from observational data, with the central goal of efficiently transferring complex skills from humans or other advanced controllers to robots. Current research emphasizes improving data efficiency through techniques such as active learning, data augmentation, and leveraging large language models to provide richer context and handle failures. The field is important for robotics, autonomous driving, and other domains that require complex control policies, since it offers a more data-driven and potentially less labor-intensive alternative to traditional hand-programmed control.
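In its simplest form, learning from expert demonstrations reduces to supervised regression: behavioral cloning fits a policy to recorded (state, action) pairs so that the policy's actions match the expert's. The sketch below is illustrative only, and is not drawn from any paper listed here; the linear expert controller, the 1-D state, and the learning-rate and epoch settings are all assumptions chosen to keep the example self-contained.

```python
import random

def expert_action(state):
    """Hypothetical expert controller (assumed for illustration): a = 2s - 1."""
    return 2.0 * state - 1.0

def collect_demonstrations(n=100, seed=0):
    """Record (state, action) pairs by querying the expert on sampled states."""
    rng = random.Random(seed)
    states = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return [(s, expert_action(s)) for s in states]

def behavioral_cloning(demos, lr=0.1, epochs=500):
    """Fit a linear policy a = w*s + b by minimizing mean squared error
    against the expert's actions with plain gradient descent."""
    w, b = 0.0, 0.0
    n = len(demos)
    for _ in range(epochs):
        gw = gb = 0.0
        for s, a in demos:
            err = (w * s + b) - a   # policy action minus expert action
            gw += 2.0 * err * s / n
            gb += 2.0 * err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

demos = collect_demonstrations()
w, b = behavioral_cloning(demos)
print(round(w, 2), round(b, 2))  # recovers roughly w = 2, b = -1
```

Because the demonstrations here are noiseless and the expert is itself linear, the cloned policy recovers the expert exactly; the data-efficiency techniques surveyed above (active learning, augmentation, language-model context) address the harder realistic setting where demonstrations are scarce, noisy, or cover only part of the state space.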
Papers
BMP: Bridging the Gap between B-Spline and Movement Primitives
Weiran Liao, Ge Li, Hongyi Zhou, Rudolf Lioutikov, Gerhard Neumann
Learning Generalizable 3D Manipulation With 10 Demonstrations
Yu Ren, Yang Cong, Ronghan Chen, Jiahao Long
ALPHA-$α$ and Bi-ACT Are All You Need: Importance of Position and Force Information/Control for Imitation Learning of Unimanual and Bimanual Robotic Manipulation with Low-Cost System
Masato Kobayashi, Thanpimon Buamanee, Takumi Kobayashi
Autonomous Robotic Pepper Harvesting: Imitation Learning in Unstructured Agricultural Environments
Chung Hee Kim, Abhisesh Silwal, George Kantor
Off-Dynamics Reinforcement Learning via Domain Adaptation and Reward Augmented Imitation
Yihong Guo, Yixuan Wang, Yuanyuan Shi, Pan Xu, Anqi Liu
Imitation Learning from Observations: An Autoregressive Mixture of Experts Approach
Renzi Wang, Flavia Sofia Acerbo, Tong Duy Son, Panagiotis Patrinos
EMPERROR: A Flexible Generative Perception Error Model for Probing Self-Driving Planners
Niklas Hanselmann, Simon Doll, Marius Cordts, Hendrik P.A. Lensch, Andreas Geiger
Task-Oriented Hierarchical Object Decomposition for Visuomotor Control
Jianing Qian, Yunshuang Li, Bernadette Bucher, Dinesh Jayaraman
GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation
Haoran Lu, Ruihai Wu, Yitong Li, Sijie Li, Ziyu Zhu, Chuanruo Ning, Yan Shen, Longzan Luo, Yuanpei Chen, Hao Dong