Zero-Shot Point Cloud Understanding

Zero-shot point cloud understanding aims to enable 3D object recognition and segmentation without labeled training data for the target classes. Current research focuses on leveraging pre-trained vision-language models such as CLIP, and more recently multimodal large language models such as GPT-4V, adapting them to point cloud data through techniques such as feature alignment and geometric primitive analysis. These methods improve on earlier approaches by incorporating 3D geometric information and complementary modalities (e.g., rendered or paired images) to boost zero-shot performance on classification, segmentation, and registration. The field is significant for advancing label-efficient learning in 3D computer vision, with potential applications in robotics, autonomous driving, and 3D scene understanding.
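To make the feature-alignment idea concrete, the following is a minimal sketch of CLIP-based zero-shot point cloud classification in the spirit of projection-based methods such as PointCLIP. It assumes PyTorch and OpenAI's clip package are installed; the orthographic depth-map projection, the prompt template, the class names, and the random placeholder point cloud are illustrative assumptions, not the procedure of any specific paper.

```python
"""Sketch: zero-shot point cloud classification by aligning projected
depth maps with CLIP text embeddings (assumes `pip install torch clip`
via https://github.com/openai/CLIP)."""
import numpy as np
import torch
import clip


def depth_maps_from_points(points, num_views=6, resolution=224):
    """Project a point cloud (N, 3) into simple orthographic depth maps
    from several viewpoints (a stand-in for a real renderer)."""
    points = points - points.mean(axis=0)
    points = points / (np.linalg.norm(points, axis=1).max() + 1e-8)  # unit sphere
    views = []
    for theta in np.linspace(0, 2 * np.pi, num_views, endpoint=False):
        rot = np.array([[np.cos(theta), 0, np.sin(theta)],
                        [0, 1, 0],
                        [-np.sin(theta), 0, np.cos(theta)]])
        p = points @ rot.T
        # Map x, y to pixel coordinates; use z as depth, nearest point wins.
        u = ((p[:, 0] + 1) / 2 * (resolution - 1)).astype(int)
        v = ((p[:, 1] + 1) / 2 * (resolution - 1)).astype(int)
        depth = np.zeros((resolution, resolution), dtype=np.float32)
        np.maximum.at(depth, (v, u), (p[:, 2] + 1) / 2)
        views.append(np.stack([depth] * 3))  # replicate to 3 channels for CLIP
    return torch.from_numpy(np.stack(views))  # (num_views, 3, H, W)


@torch.no_grad()
def zero_shot_classify(points, class_names, device="cpu"):
    """Score each class prompt against all views and return the best match."""
    model, _ = clip.load("ViT-B/32", device=device)
    images = depth_maps_from_points(points).to(device)
    prompts = clip.tokenize([f"a depth map of a {c}" for c in class_names]).to(device)
    img_feats = model.encode_image(images)
    txt_feats = model.encode_text(prompts)
    img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
    txt_feats = txt_feats / txt_feats.norm(dim=-1, keepdim=True)
    logits = img_feats @ txt_feats.T            # (num_views, num_classes)
    return class_names[int(logits.mean(dim=0).argmax())]


if __name__ == "__main__":
    cloud = np.random.rand(2048, 3).astype(np.float32)  # placeholder point cloud
    print(zero_shot_classify(cloud, ["chair", "airplane", "car"]))
```

The key design point is that no 3D training is involved: the point cloud is re-expressed in a modality (images) that the frozen vision-language model already understands, and classification reduces to comparing image and text embeddings in the shared feature space.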

Papers