Based Imitation
Imitation learning aims to train agents by learning from expert demonstrations, offering a more data-efficient alternative to reinforcement learning. Current research focuses on improving robustness and generalization across diverse tasks and environments, employing techniques such as diffusion models, Koopman operators, and contrastive learning within architectures including transformers and probabilistic programs. This approach is significant for robotics, where it enables robots to acquire complex skills from human demonstrations, and for fields such as autonomous driving and human behavior modeling, where it facilitates the creation of more realistic and efficient simulations.
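The core idea of learning from expert demonstrations can be illustrated with behavior cloning, its simplest form: treat demonstrated state-action pairs as supervised data and regress a policy onto them. The sketch below is a minimal illustration under assumed synthetic data, not the method of any paper listed here; the linear "expert" controller and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: a fixed linear controller mapping a 4-D state
# to a 2-D action. In practice demonstrations would come from a human
# or an optimized full-order model, not a known matrix.
W_expert = np.array([[1.0, -0.5, 0.0, 0.2],
                     [0.3, 0.0, -1.0, 0.5]])

states = rng.normal(size=(200, 4))   # demonstrated states
actions = states @ W_expert.T        # expert actions for those states

# Behavior cloning: supervised least-squares regression from states
# to actions, yielding a linear policy a = W s.
W_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_learned = W_learned.T

# Evaluate the cloned policy on held-out states.
test_states = rng.normal(size=(50, 4))
error = np.max(np.abs(test_states @ W_learned.T - test_states @ W_expert.T))
print(f"max action error: {error:.2e}")
```

Real systems replace the linear regressor with expressive models (e.g., diffusion policies or transformers) and must also cope with distribution shift when the learned policy visits states absent from the demonstrations.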
Papers
Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects
Paul Maria Scheikl, Nicolas Schreiber, Christoph Haas, Niklas Freymuth, Gerhard Neumann, Rudolf Lioutikov, Franziska Mathis-Ullrich
Benchmarking the Full-Order Model Optimization Based Imitation in the Humanoid Robot Reinforcement Learning Walk
Ekaterina Chaikovskaya, Inna Minashina, Vladimir Litvinenko, Egor Davydenko, Dmitry Makarov, Yulia Danik, Roman Gorbachev