Manipulation Task
Robotic manipulation research focuses on enabling robots to perform complex tasks involving object interaction, driven by the need for more adaptable and robust automation. Current efforts center on vision-language-action (VLA) models, often built on large language models and deep reinforcement learning, that translate natural language instructions and visual input into precise robot actions, with a strong emphasis on robustness and generalization across diverse scenarios and objects. This work is crucial for advancing robotics across sectors ranging from manufacturing and logistics to assistive technologies, by creating robots that can understand and respond to complex, real-world instructions.
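The vision-language-action interface described above can be sketched as a policy that maps an image observation and a natural-language instruction to a low-level robot action. The following is a minimal, illustrative Python sketch, not any specific model from the papers below; all names (`ToyVLAPolicy`, `Action`, the toy encoders) are hypothetical stand-ins for large pretrained vision and language backbones.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    # End-effector translation command (x, y, z) and a gripper command in [0, 1].
    delta_xyz: List[float]
    gripper: float


class ToyVLAPolicy:
    """Hypothetical stand-in for a learned vision-language-action model."""

    def _encode_text(self, instruction: str) -> float:
        # Toy "language encoder": deterministic hash of the instruction, scaled to [0, 1).
        return (sum(map(ord, instruction)) % 997) / 997.0

    def _encode_image(self, image: List[List[float]]) -> float:
        # Toy "vision encoder": mean pixel intensity of a grayscale frame.
        pixels = [p for row in image for p in row]
        return sum(pixels) / len(pixels)

    def act(self, image: List[List[float]], instruction: str) -> Action:
        t = self._encode_text(instruction)
        v = self._encode_image(image)
        # A real policy head would be a learned network conditioned on both
        # modalities; here the features are simply mixed to produce a motion,
        # and the gripper closes when the instruction mentions grasping.
        return Action(
            delta_xyz=[t - v, v - t, t * v],
            gripper=1.0 if "grasp" in instruction else 0.0,
        )


if __name__ == "__main__":
    frame = [[0.0, 0.5], [1.0, 0.5]]  # 2x2 grayscale "camera frame"
    action = ToyVLAPolicy().act(frame, "grasp the red block")
    print(len(action.delta_xyz), action.gripper)
```

Real VLA systems replace each toy encoder with a pretrained backbone and train the action head on robot demonstration data; the point here is only the input/output contract.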
Papers
Contact-rich SE(3)-Equivariant Robot Manipulation Task Learning via Geometric Impedance Control
Joohwan Seo, Nikhil Potu Surya Prakash, Xiang Zhang, Changhao Wang, Jongeun Choi, Masayoshi Tomizuka, Roberto Horowitz
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa