Visuomotor Policy
Visuomotor policy learning aims to enable robots to map visual input directly to actions so that complex manipulation tasks can be performed end to end. Current research focuses on improving the robustness and generalization of these policies across diverse environments and robot embodiments, often employing diffusion models, transformers, and trajectory optimization to handle high-dimensional observations and multimodal action distributions. This work is central to advancing robotics: it enables more adaptable and efficient robots that can perform a wider range of tasks in unstructured settings, with significant implications for manufacturing, healthcare, and other domains.
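To make the diffusion-based approach mentioned above more concrete, the sketch below trains a small network to denoise action chunks conditioned on a camera image, in the spirit of diffusion-policy methods. It is a minimal illustration only: the class name `VisuomotorDiffusionPolicy`, the network sizes, the noise schedule, and the toy data are assumptions, not the architecture of any paper listed here.

```python
# Minimal sketch of a diffusion-based visuomotor policy (illustrative only).
# All module names, dimensions, and the toy data are assumptions, not any
# specific paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisuomotorDiffusionPolicy(nn.Module):
    """Predicts the noise added to an action chunk, conditioned on an image."""

    def __init__(self, action_dim=7, horizon=8, obs_dim=64, n_steps=100):
        super().__init__()
        self.horizon, self.action_dim, self.n_steps = horizon, action_dim, n_steps
        # Tiny CNN encoder standing in for a real vision backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, obs_dim),
        )
        # MLP denoiser over the flattened action chunk plus conditioning.
        in_dim = horizon * action_dim + obs_dim + 1
        self.denoiser = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, horizon * action_dim),
        )
        # Linear beta schedule, DDPM-style.
        betas = torch.linspace(1e-4, 0.02, n_steps)
        self.register_buffer("alphas_cumprod", torch.cumprod(1.0 - betas, dim=0))

    def forward(self, image, noisy_actions, t):
        obs = self.encoder(image)                          # (B, obs_dim)
        t_feat = (t.float() / self.n_steps).unsqueeze(-1)  # (B, 1) timestep feature
        x = torch.cat([noisy_actions.flatten(1), obs, t_feat], dim=-1)
        return self.denoiser(x).view(-1, self.horizon, self.action_dim)

    def loss(self, image, actions):
        """Standard denoising objective: predict the injected Gaussian noise."""
        b = actions.shape[0]
        t = torch.randint(0, self.n_steps, (b,), device=actions.device)
        noise = torch.randn_like(actions)
        a_bar = self.alphas_cumprod[t].view(b, 1, 1)
        noisy = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise
        return F.mse_loss(self(image, noisy, t), noise)

# Toy usage with random tensors standing in for demonstration data.
policy = VisuomotorDiffusionPolicy()
images = torch.randn(4, 3, 64, 64)   # camera observations
actions = torch.randn(4, 8, 7)       # action chunks (horizon x DoF)
loss = policy.loss(images, actions)
loss.backward()
print(f"denoising loss: {loss.item():.4f}")
```

At inference time such a policy would iteratively denoise an action chunk starting from Gaussian noise, conditioned on the current observation; that sampling loop is omitted here for brevity.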
Papers
Vision Language Models are In-Context Value Learners
Yecheng Jason Ma, Joey Hejna, Ayzaan Wahid, Chuyuan Fu, Dhruv Shah, Jacky Liang, Zhuo Xu, Sean Kirmani, Peng Xu, Danny Driess, Ted Xiao, Jonathan Tompson, Osbert Bastani, Dinesh Jayaraman, Wenhao Yu, Tingnan Zhang, Dorsa Sadigh, Fei Xia
Raising Body Ownership in End-to-End Visuomotor Policy Learning via Robot-Centric Pooling
Zheyu Zhuang, Ville Kyrki, Danica Kragic
A Comparative Study on State-Action Spaces for Learning Viewpoint Selection and Manipulation with Diffusion Policy
Xiatao Sun, Francis Fan, Yinxing Chen, Daniel Rakita
Admittance Visuomotor Policy Learning for General-Purpose Contact-Rich Manipulations
Bo Zhou, Ruixuan Jiao, Yi Li, Xiaogang Yuan, Fang Fang, Shihua Li