Garment Manipulation
Garment manipulation research aims to give robots the dexterity humans show when handling clothing. Current efforts concentrate on two fronts: robust perception systems, which often employ vision-language models and keypoint detection to infer garment structure and state, and manipulation strategies learned from diverse datasets and simulation, including approaches built on radiance fields and dense visual correspondences. The field is central to advancing robotics in areas such as assistive technology and home automation, and recent work emphasizes general-purpose solutions that transfer across garment types and manipulation tasks. A minimal sketch of the keypoint-based pipeline appears below.
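To make the keypoint-detection idea concrete, here is a minimal, self-contained sketch of a common pattern in this space: a small network regresses one heatmap per garment keypoint (e.g., towel corners), and the argmax of each heatmap gives a pixel location that a downstream planner could turn into a grasp pose. All names here (GarmentKeypointNet, NUM_KEYPOINTS, heatmaps_to_pixels) are hypothetical illustrations, not the API of SKT, Flat'n'Fold, or any other listed paper.

```python
# Illustrative heatmap-based garment keypoint pipeline (hypothetical names;
# not reproducing any listed paper's actual method or code).
import torch
import torch.nn as nn

NUM_KEYPOINTS = 4  # e.g., the four corners of a flattened towel

class GarmentKeypointNet(nn.Module):
    """Tiny encoder-decoder that regresses one heatmap per keypoint."""
    def __init__(self, num_keypoints: int = NUM_KEYPOINTS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_keypoints, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) -> per-keypoint heatmaps: (B, K, H, W)
        return self.decoder(self.encoder(rgb))

def heatmaps_to_pixels(heatmaps: torch.Tensor) -> torch.Tensor:
    """Argmax each heatmap into (row, col) pixel coordinates, shape (B, K, 2)."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1).argmax(dim=-1)
    return torch.stack((flat // w, flat % w), dim=-1)

if __name__ == "__main__":
    net = GarmentKeypointNet()
    image = torch.rand(1, 3, 128, 128)   # stand-in for an RGB observation
    keypoints = heatmaps_to_pixels(net(image))
    # Downstream, a planner would map these pixels to 3D grasp poses,
    # e.g., grasping two adjacent corners to initiate a fold.
    print(keypoints.shape)  # torch.Size([1, 4, 2])
```

In practice, systems in this area condition such detectors on garment state (crumpled vs. flat) or pair them with language instructions; the sketch above only shows the shared perception-to-pixel backbone.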
Papers
Flat'n'Fold: A Diverse Multi-Modal Dataset for Garment Perception and Manipulation
Lipeng Zhuang, Shiyu Fan, Yingdong Ru, Florent Audonnet, Paul Henderson, Gerardo Aragon-Camarasa
SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation
Xin Li, Siyuan Huang, Qiaojun Yu, Zhengkai Jiang, Ce Hao, Yimeng Zhu, Hongsheng Li, Peng Gao, Cewu Lu