Offline Imitation
Offline imitation learning aims to train agents to mimic expert behavior using only pre-recorded demonstrations, without further environment interaction. Current research focuses on challenges such as limited or potentially suboptimal demonstration data, using techniques including weighted behavioral cloning, model-based approaches (such as reverse augmentation and world models), and optimal transport methods that align agent and expert trajectories. These advances matter because they enable learning complex behaviors from limited data, with applications ranging from robotics and autonomous systems to sports analytics and personalized medicine.
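As a rough illustration of one technique named above, the sketch below shows weighted behavioral cloning: a standard behavioral cloning loss where each transition's contribution is scaled by a per-sample weight reflecting how expert-like it is. This is a minimal, generic example and is not drawn from any of the listed papers; the network sizes, the MSE loss, and the random placeholder data and weights are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Small MLP policy mapping observations to continuous actions."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def weighted_bc_loss(policy, obs, actions, weights):
    """Behavioral cloning loss with per-sample weights.

    Each transition's squared error is scaled by its weight, so
    transitions judged more expert-like dominate the gradient.
    """
    pred = policy(obs)
    per_sample = ((pred - actions) ** 2).mean(dim=-1)
    return (weights * per_sample).mean()

# Toy usage with random tensors standing in for an offline dataset.
obs_dim, act_dim, batch = 11, 3, 256
policy = Policy(obs_dim, act_dim)
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

obs = torch.randn(batch, obs_dim)
actions = torch.randn(batch, act_dim)
# Placeholder weights: in practice these might come from advantage
# estimates or a discriminator scoring how expert-like each sample is.
weights = torch.rand(batch)

loss = weighted_bc_loss(policy, obs, actions, weights)
opt.zero_grad()
loss.backward()
opt.step()
```

Uniform weights recover plain behavioral cloning; the methods surveyed above differ mainly in how such weights (or equivalent trajectory alignments) are estimated from suboptimal data.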
Papers
Minimax Optimal Online Imitation Learning via Replay Estimation
Gokul Swamy, Nived Rajaraman, Matthew Peng, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu, Jiantao Jiao, Kannan Ramchandran
Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning
Maximilian Du, Olivia Y. Lee, Suraj Nair, Chelsea Finn