Manipulation Task
Robotic manipulation research aims to enable robots to perform complex tasks involving object interaction, driven by the need for more adaptable and robust automation. Current efforts center on vision-language-action (VLA) models, which often leverage large language models and deep reinforcement learning to translate natural language instructions and visual input into precise robot actions, with a strong emphasis on robustness and generalization across diverse scenarios and objects. This work is crucial for advancing robotics across sectors, from manufacturing and logistics to assistive technologies, by producing robots that can understand and act on complex, real-world instructions.
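To make the VLA pipeline described above concrete, here is a minimal, hypothetical sketch of the interface such models expose: an image observation and a natural language instruction go in, and a continuous low-level action vector comes out. The class name, stub encoders, and 7-DoF action convention are illustrative assumptions, not the implementation of any paper listed below; real systems replace the stubs with a pretrained vision backbone and a language model, and learn the action head from demonstrations or reinforcement.

```python
# Hypothetical sketch of a vision-language-action (VLA) policy interface.
# All names and the stub encoders are illustrative assumptions, not taken
# from any of the papers in this digest.
import numpy as np


class ToyVLAPolicy:
    """Maps (image, instruction) -> continuous robot action."""

    def __init__(self, embed_dim: int = 64, action_dim: int = 7, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Random linear action head standing in for a learned decoder.
        self.W = rng.standard_normal((action_dim, 2 * embed_dim)) * 0.01
        self.embed_dim = embed_dim

    def _encode_image(self, image: np.ndarray) -> np.ndarray:
        # Stub "vision encoder": global average pooling plus a fixed resize,
        # standing in for a pretrained visual backbone.
        pooled = image.reshape(-1, image.shape[-1]).mean(axis=0)
        feat = np.resize(pooled, self.embed_dim)
        return feat / (np.linalg.norm(feat) + 1e-8)

    def _encode_text(self, instruction: str) -> np.ndarray:
        # Stub "language encoder": bag-of-words hashing, standing in
        # for a large language model's instruction embedding.
        feat = np.zeros(self.embed_dim)
        for token in instruction.lower().split():
            feat[hash(token) % self.embed_dim] += 1.0
        return feat / (np.linalg.norm(feat) + 1e-8)

    def predict_action(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # Fuse the two modalities and decode a 7-DoF action:
        # [dx, dy, dz, droll, dpitch, dyaw, gripper].
        fused = np.concatenate([self._encode_image(image),
                                self._encode_text(instruction)])
        return np.tanh(self.W @ fused)


if __name__ == "__main__":
    policy = ToyVLAPolicy()
    rgb = np.zeros((224, 224, 3))  # placeholder camera frame
    action = policy.predict_action(rgb, "pick up the red block")
    print(action.shape, action)  # (7,) continuous action vector
```

In a deployed system this predict_action call would run in a closed loop, re-querying the policy at each control step as the camera frame changes.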
Papers
Play to the Score: Stage-Guided Dynamic Multi-Sensory Fusion for Robotic Manipulation
Ruoxuan Feng, Di Hu, Wenke Ma, Xuelong Li
Jacta: A Versatile Planner for Learning Dexterous and Whole-body Manipulation
Jan Brüdigam, Ali-Adeeb Abbas, Maks Sorokin, Kuan Fang, Brandon Hung, Maya Guru, Stefan Sosnowski, Jiuguang Wang, Sandra Hirche, Simon Le Cleac'h
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane
Shivendra Agrawal, Suresh Nayak, Ashutosh Naik, Bradley Hayes
InterPreT: Interactive Predicate Learning from Language Feedback for Generalizable Task Planning
Muzhi Han, Yifeng Zhu, Song-Chun Zhu, Ying Nian Wu, Yuke Zhu