Cross-Embodiment
Cross-embodiment research aims to enable robots to learn and perform tasks across diverse physical forms (embodiments), transferring skills learned on one robot to another with different kinematics, sensors, and action spaces. Current work focuses on algorithms and model architectures, such as transformers and diffusion models, that learn representations of observations and actions robust enough to support zero-shot transfer to unseen robots. This matters because it promises to drastically reduce the cost and time of training robots for new tasks, paving the way for more adaptable and versatile robotic systems.
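To make the conditioning idea concrete, here is a minimal sketch of one common pattern: a transformer policy that prepends a learned per-robot embodiment token to the observation sequence, so a single network can be trained on data from several robots and switched between them at inference time. This is an illustrative assumption, not the method of any paper listed below; all module names, dimensions, and the shared padded action space are hypothetical.

```python
# Minimal sketch (assumed pattern, not a specific paper's method) of a
# transformer policy conditioned on a learned embodiment embedding.
import torch
import torch.nn as nn


class CrossEmbodimentPolicy(nn.Module):
    def __init__(self, obs_dim=512, action_dim=7, num_embodiments=4,
                 d_model=256, nhead=4, num_layers=2):
        super().__init__()
        # Project per-timestep observation features (e.g. from a vision encoder).
        self.obs_proj = nn.Linear(obs_dim, d_model)
        # One learned token per robot; swapping this token is the only change
        # needed to run the same policy on a different embodiment.
        self.embodiment_emb = nn.Embedding(num_embodiments, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Predict actions in a shared, zero-padded action space (hypothetical).
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, obs_seq, embodiment_id):
        # obs_seq: (batch, time, obs_dim); embodiment_id: (batch,)
        tokens = self.obs_proj(obs_seq)
        emb = self.embodiment_emb(embodiment_id).unsqueeze(1)  # (batch, 1, d_model)
        tokens = torch.cat([emb, tokens], dim=1)  # prepend embodiment token
        hidden = self.encoder(tokens)
        # Decode the action from the final timestep's representation.
        return self.action_head(hidden[:, -1])


if __name__ == "__main__":
    policy = CrossEmbodimentPolicy()
    obs = torch.randn(2, 8, 512)     # two trajectories, 8 steps each
    robot = torch.tensor([0, 3])     # two different embodiments
    print(policy(obs, robot).shape)  # torch.Size([2, 7])
```

The embodiment token is one of several plausible conditioning mechanisms; published systems also vary the action head per robot or rely purely on shared vision-language representations.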
Papers
Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models
Xinghang Li, Peiyan Li, Minghuan Liu, Dong Wang, Jirong Liu, Bingyi Kang, Xiao Ma, Tao Kong, Hanbo Zhang, Huaping Liu
RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation
Kun Wu, Chengkai Hou, Jiaming Liu, Zhengping Che, Xiaozhu Ju, Zhuqin Yang, Meng Li, Yinuo Zhao, Zhiyuan Xu, Guang Yang, Zhen Zhao, Guangyu Li, Zhao Jin, Lecheng Wang, Jilei Mao, Xinhua Wang, Shichao Fan, Ning Liu, Pei Ren, Qiang Zhang, Yaoxu Lyu, Mengzhen Liu, Jingyang He, Yulin Luo, Zeyu Gao, Chenxuan Li, Chenyang Gu, Yankai Fu, Di Wu, Xingyu Wang, Sixiang Chen, Zhenyu Wang, Pengju An, Siyuan Qian, Shanghang Zhang, Jian Tang