Manipulation Task
Robotic manipulation research aims to enable robots to perform complex tasks involving object interaction, driven by the need for more adaptable and reliable automation. Current efforts center on vision-language-action (VLA) models, often built on large language models and deep reinforcement learning, which translate natural-language instructions and visual input into precise robot actions, with particular emphasis on robustness and generalization across diverse scenarios and objects. This work is crucial for advancing robotics across sectors from manufacturing and logistics to assistive technologies, producing robots that can understand and respond to complex, real-world instructions.
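The core idea of a VLA model — mapping an image observation plus a language instruction to a low-level robot action — can be sketched as a minimal toy interface. Everything below (the class name, the placeholder encoders, the 7-dimensional action) is a hypothetical illustration of the general pattern, not the API of any model from the papers listed here.

```python
import numpy as np

class ToyVLAPolicy:
    """Illustrative sketch of a vision-language-action interface.

    A VLA policy fuses visual and language features and predicts a
    low-level action (here, a 7-DoF end-effector command). Real systems
    replace these placeholder encoders with large pretrained backbones.
    """

    def __init__(self, action_dim: int = 7, seed: int = 0):
        self.action_dim = action_dim
        rng = np.random.default_rng(seed)
        # Stand-in for learned weights mapping fused features to actions.
        self.head = rng.normal(size=(64, action_dim))

    def encode_image(self, image: np.ndarray) -> np.ndarray:
        # Placeholder visual encoder: flatten and pool to 32 dims.
        flat = image.astype(np.float64).reshape(-1)
        return np.resize(flat, 32) / 255.0

    def encode_text(self, instruction: str) -> np.ndarray:
        # Placeholder language encoder: hashed bag-of-words, 32 dims.
        feat = np.zeros(32)
        for tok in instruction.lower().split():
            feat[hash(tok) % 32] += 1.0
        return feat

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # Fuse modalities and predict a bounded action vector in [-1, 1].
        fused = np.concatenate(
            [self.encode_image(image), self.encode_text(instruction)]
        )
        return np.tanh(fused @ self.head)

policy = ToyVLAPolicy()
obs = np.zeros((64, 64, 3), dtype=np.uint8)  # dummy camera frame
action = policy.act(obs, "pick up the red block")
print(action.shape)  # (7,)
```

The instruction changes the fused feature vector and hence the predicted action, which is the essential property the papers below probe — for capability (LADEV), for skill discovery, and for adversarial robustness.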
Papers
LADEV: A Language-Driven Testing and Evaluation Platform for Vision-Language-Action Models in Robotic Manipulation
Zhijie Wang, Zhehua Zhou, Jiayang Song, Yuheng Huang, Zhan Shu, Lei Ma
Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation
Paul Jansonnie, Bingbing Wu, Julien Perez, Jan Peters
Automatic Behavior Tree Expansion with LLMs for Robotic Manipulation
Jonathan Styrud, Matteo Iovino, Mikael Norrlöf, Mårten Björkman, Christian Smith
Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models
Hao Cheng, Erjia Xiao, Chengyuan Yu, Zhao Yao, Jiahang Cao, Qiang Zhang, Jiaxu Wang, Mengshu Sun, Kaidi Xu, Jindong Gu, Renjing Xu