Diffusion Policy
Diffusion policies adapt diffusion models to generate actions for complex tasks, aiming to improve the robustness, efficiency, and generalization of reinforcement learning agents and robotic controllers. Current research refines algorithms such as Diffusion Policy Policy Optimization (DPPO) and explores architectures such as Mixture of Experts (MoE) to enhance multi-task learning and reduce computational cost, often drawing on techniques from reinforcement learning and imitation learning. This approach holds significant promise for robotics, particularly manipulation, locomotion, and navigation, by enabling more adaptable and data-efficient learning of complex behaviors.
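To make the core idea concrete, here is a minimal sketch of how a diffusion policy samples an action: starting from Gaussian noise and iteratively denoising it, conditioned on an observation. The `toy_denoiser` below is a hypothetical stand-in for a trained noise-prediction network; the schedule and update follow a standard DDPM-style reverse process, not any specific paper's implementation.

```python
import numpy as np

def toy_denoiser(action, obs, t):
    # Hypothetical stand-in for a learned noise-prediction network
    # eps_theta(action, obs, t). A real diffusion policy would use a
    # trained neural network here; this toy version just points the
    # "predicted noise" away from an observation-derived target.
    target = np.tanh(obs)
    return action - target

def sample_action(obs, num_steps=50, action_dim=2, seed=0):
    rng = np.random.default_rng(seed)
    # DDPM-style linear noise schedule (assumed values for illustration).
    betas = np.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    # Start from pure Gaussian noise and iteratively denoise.
    action = rng.standard_normal(action_dim)
    for t in reversed(range(num_steps)):
        eps = toy_denoiser(action, obs, t)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (action - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(action_dim) if t > 0 else 0.0
        action = mean + np.sqrt(betas[t]) * noise
    return action

obs = np.array([0.5, -1.0])
print(sample_action(obs))
```

In a real policy the denoiser is trained on demonstration data (imitation learning) or fine-tuned with RL objectives such as DPPO, and typically predicts a short action sequence rather than a single action.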
Papers
Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for Robot Learning
Yixiao Wang, Yifei Zhang, Mingxiao Huo, Ran Tian, Xiang Zhang, Yichen Xie, Chenfeng Xu, Pengliang Ji, Wei Zhan, Mingyu Ding, Masayoshi Tomizuka
EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning
Jingyun Yang, Zi-ang Cao, Congyue Deng, Rika Antonova, Shuran Song, Jeannette Bohg