Paper ID: 2409.10078
3D-TAFS: A Training-free Framework for 3D Affordance Segmentation
Meng Chu, Xuan Zhang, Zhedong Zheng, Tat-Seng Chua
Translating high-level linguistic instructions into precise robotic actions in the physical world remains challenging, particularly when considering the feasibility of interacting with 3D objects. In this paper, we introduce 3D-TAFS, a novel training-free multimodal framework for 3D affordance segmentation, alongside a benchmark for evaluating interactive language-guided affordance segmentation in everyday environments. In particular, our framework integrates a large multimodal model with a specialized 3D vision network, enabling seamless fusion of 2D and 3D visual understanding with language comprehension. To facilitate evaluation, we present a dataset of ten typical indoor environments, each with 50 images annotated for object actions and 3D affordance segmentation. Extensive experiments show that 3D-TAFS handles interactive 3D affordance segmentation across diverse settings and achieves competitive performance on multiple metrics. Our results highlight 3D-TAFS's potential for enhancing human-robot interaction based on affordance understanding in complex indoor environments, advancing the development of more intuitive and efficient robotic frameworks for real-world applications.
Submitted: Sep 16, 2024
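The abstract does not spell out how the 2D multimodal model and the 3D vision network are combined. Below is a minimal sketch, assuming a training-free pipeline in which a 2D multimodal model proposes an affordance mask for the instruction on an RGB image, and that mask is lifted onto the scene point cloud by pinhole back-projection. The `query_vlm_mask` callable, the labelling radius, and all function names are hypothetical placeholders for illustration, not components described in the paper.

```python
import numpy as np

def backproject_mask_to_points(mask, depth, intrinsics):
    """Lift a 2D affordance mask into 3D camera-frame points (pinhole model).

    mask:       (H, W) bool array, True where the 2D model marks the affordance region
    depth:      (H, W) float array, depth in metres (0 = invalid)
    intrinsics: 3x3 camera matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    Returns an (N, 3) array of 3D points.
    """
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    vs, us = np.nonzero(mask & (depth > 0))      # masked pixels with valid depth
    z = depth[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

def label_point_cloud(scene_points, affordance_points, radius=0.02):
    """Mark scene points within `radius` metres of any lifted affordance point."""
    labels = np.zeros(len(scene_points), dtype=bool)
    for p in affordance_points:
        labels |= np.linalg.norm(scene_points - p, axis=1) < radius
    return labels

def segment_affordance_3d(rgb, depth, intrinsics, scene_points, instruction, query_vlm_mask):
    """Training-free pipeline sketch: instruction -> 2D mask -> lifted 3D points -> per-point labels.

    `query_vlm_mask(rgb, instruction) -> (H, W) bool mask` is a user-supplied callable
    wrapping whichever large multimodal model is available (hypothetical interface).
    """
    mask_2d = query_vlm_mask(rgb, instruction)
    lifted = backproject_mask_to_points(mask_2d, depth, intrinsics)
    return label_point_cloud(scene_points, lifted)
```

In such a sketch, the only learned components are the frozen 2D multimodal model and, optionally, a pretrained 3D network used to refine the lifted labels; no task-specific training is involved, which is consistent with the training-free claim in the abstract.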