Imitation Learning
Imitation learning trains agents to mimic expert behavior from demonstration data, with a primary focus on efficiently transferring complex skills from humans or other capable controllers to robots. Current research emphasizes improving data efficiency through techniques such as active learning, data augmentation, and the use of large language models to provide richer context and recover from failures. The field is central to robotics, autonomous driving, and other domains that require complex control policies, since it offers a more data-driven and potentially less labor-intensive alternative to hand-programming behaviors.
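To make the core idea concrete, below is a minimal behavioral cloning sketch: a policy network is fit by supervised regression onto expert state-action pairs. The network architecture, synthetic "expert" data, dimensions, and hyperparameters are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Minimal behavioral cloning sketch (illustrative; dimensions, data, and
# hyperparameters are assumptions, not drawn from any listed paper).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4

# Stand-in for an expert dataset of (state, action) pairs; in practice these
# would come from human teleoperation or a scripted/advanced controller.
states = torch.randn(1024, STATE_DIM)
expert_actions = torch.tanh(states @ torch.randn(STATE_DIM, ACTION_DIM))

# Simple MLP policy mapping states to continuous actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavioral cloning = supervised regression onto the expert's actions.
for epoch in range(100):
    pred_actions = policy(states)
    loss = nn.functional.mse_loss(pred_actions, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, the learned policy is queried on new states.
with torch.no_grad():
    action = policy(torch.randn(1, STATE_DIM))
```

Many of the papers below build on this basic recipe, for example by retrieving and reusing prior data, augmenting demonstrations, or conditioning the policy on richer visual or multimodal context.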
Papers
Learning and Retrieval from Prior Data for Skill-based Imitation Learning
Soroush Nasiriany, Tian Gao, Ajay Mandlekar, Yuke Zhu
VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors
Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu
NIFT: Neural Interaction Field and Template for Object Manipulation
Zeyu Huang, Juzhan Xu, Sisi Dai, Kai Xu, Hao Zhang, Hui Huang, Ruizhen Hu
Output Feedback Tube MPC-Guided Data Augmentation for Robust, Efficient Sensorimotor Policy Learning
Andrea Tagliabue, Jonathan P. How
Planning for Sample Efficient Imitation Learning
Zhao-Heng Yin, Weirui Ye, Qifeng Chen, Yang Gao
Hierarchical Model-Based Imitation Learning for Planning in Autonomous Driving
Eli Bronstein, Mark Palatucci, Dominik Notz, Brandyn White, Alex Kuefler, Yiren Lu, Supratik Paul, Payam Nikdel, Paul Mougin, Hongge Chen, Justin Fu, Austin Abrams, Punit Shah, Evan Racah, Benjamin Frenkel, Shimon Whiteson, Dragomir Anguelov
CNT (Conditioning on Noisy Targets): A new Algorithm for Leveraging Top-Down Feedback
Alexia Jolicoeur-Martineau, Alex Lamb, Vikas Verma, Aniket Didolkar
Eliciting Compatible Demonstrations for Multi-Human Imitation Learning
Kanishk Gandhi, Siddharth Karamcheti, Madeline Liao, Dorsa Sadigh
Model-Based Imitation Learning for Urban Driving
Anthony Hu, Gianluca Corrado, Nicolas Griffiths, Zak Murez, Corina Gurau, Hudson Yeo, Alex Kendall, Roberto Cipolla, Jamie Shotton
Iterative Document-level Information Extraction via Imitation Learning
Yunmo Chen, William Gantt, Weiwei Gu, Tongfei Chen, Aaron Steven White, Benjamin Van Durme
Real World Offline Reinforcement Learning with Realistic Data Source
Gaoyue Zhou, Liyiming Ke, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, Vikash Kumar
Travel the Same Path: A Novel TSP Solving Strategy
Pingbang Hu
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning
Aishwarya Kamath, Peter Anderson, Su Wang, Jing Yu Koh, Alexander Ku, Austin Waters, Yinfei Yang, Jason Baldridge, Zarana Parekh
VIMA: General Robot Manipulation with Multimodal Prompts
Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan