Imitation Learning
Imitation learning trains agents to mimic expert behavior by learning from demonstration data, with the primary goal of efficiently transferring complex skills from humans or other advanced controllers to robots. Current research emphasizes improving data efficiency through techniques such as active learning, data augmentation, and leveraging large language models to provide richer context and handle failures. This field is crucial for advancing robotics, autonomous driving, and other areas requiring complex control policies, as it offers a more data-driven and potentially less labor-intensive approach than traditional hand-engineered programming.
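The simplest form of this idea is behavioral cloning: treat the expert's (state, action) pairs as a supervised dataset and fit a policy to reproduce the actions. The sketch below is a minimal, library-free illustration (not drawn from any of the papers listed here); it uses a 1-nearest-neighbor lookup as a stand-in for a trained regressor, and the 1-D navigation demonstrations are hypothetical.

```python
# Minimal behavioral-cloning sketch (illustrative only).
# Expert demonstrations are (state, action) pairs; the "learned" policy
# is a 1-nearest-neighbor lookup standing in for a supervised model.

def clone_policy(demonstrations):
    """Return a policy that imitates the expert via nearest-neighbor lookup."""
    def policy(state):
        # Choose the action whose recorded state is closest (squared
        # Euclidean distance) to the query state.
        _, action = min(
            demonstrations,
            key=lambda sa: sum((a - b) ** 2 for a, b in zip(sa[0], state)),
        )
        return action
    return policy

# Hypothetical 1-D navigation demos: state = (position,), action = velocity.
demos = [((0.0,), 1.0), ((1.0,), 1.0), ((2.0,), 0.0)]
policy = clone_policy(demos)
print(policy((0.4,)))  # nearest demo is at position 0.0, so prints 1.0
```

Behavioral cloning works well near the demonstrated states but can compound errors once the agent drifts off the expert's distribution, which is one motivation for the active-learning and failure-handling techniques mentioned above.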
Papers
Optimizing Crop Management with Reinforcement Learning and Imitation Learning
Ran Tao, Pan Zhao, Jing Wu, Nicolas F. Martin, Matthew T. Harrison, Carla Ferreira, Zahra Kalantari, Naira Hovakimyan
A Joint Imitation-Reinforcement Learning Framework for Reduced Baseline Regret
Sheelabhadra Dey, Sumedh Pendurkar, Guni Sharon, Josiah P. Hanna
Gesture2Path: Imitation Learning for Gesture-aware Navigation
Catie Cuan, Edward Lee, Emre Fisher, Anthony Francis, Leila Takayama, Tingnan Zhang, Alexander Toshev, Sören Pirk
Latent Plans for Task-Agnostic Offline Reinforcement Learning
Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard
Imitrob: Imitation Learning Dataset for Training and Evaluating 6D Object Pose Estimators
Jiri Sedlar, Karla Stepanova, Radoslav Skoviera, Jan K. Behrens, Matus Tuna, Gabriela Sejnova, Josef Sivic, Robert Babuska
Masked Imitation Learning: Discovering Environment-Invariant Modalities in Multimodal Demonstrations
Yilun Hao, Ruinan Wang, Zhangjie Cao, Zihan Wang, Yuchen Cui, Dorsa Sadigh