Unsupervised 3D
Unsupervised 3D learning aims to extract meaningful representations from 3D data, such as point clouds and LiDAR scans, without relying on manually labeled datasets, significantly reducing the cost and effort of annotation. Current research focuses on novel architectures and algorithms, including contrastive learning, mesh fusion, and leveraging 2D information (images and text) to generate pseudo-labels for training 3D models. These advances improve performance on downstream tasks such as 3D segmentation, pose estimation, and object recognition, and benefit fields like autonomous driving and robotics by enabling more robust and scalable 3D perception systems.
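To make the contrastive-learning idea concrete, here is a minimal, generic sketch of an InfoNCE-style loss: embeddings of the same point cloud under two different augmentations form a positive pair, while all other pairs in the batch serve as negatives. This is an illustrative example in plain Python, not code from any of the listed papers; the function names and the temperature value are assumptions for the sketch.

```python
import math

def normalize(v):
    # Scale a vector to unit length so similarity is cosine similarity.
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def info_nce_loss(view_a, view_b, temperature=0.1):
    """Contrastive (InfoNCE-style) loss over a batch of embeddings.

    view_a[i] and view_b[i] are embeddings of the same 3D sample under
    two augmentations (the positive pair); every view_b[j], j != i, is
    treated as a negative for anchor view_a[i].
    """
    a = [normalize(v) for v in view_a]
    b = [normalize(v) for v in view_b]
    total = 0.0
    for i in range(len(a)):
        # Cosine similarity of anchor i against every candidate in view_b.
        sims = [sum(x * y for x, y in zip(a[i], b[j])) / temperature
                for j in range(len(b))]
        # Log-sum-exp with max-shift for numerical stability.
        m = max(sims)
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        # Negative log-probability that the positive pair is matched.
        total += -(sims[i] - log_denom)
    return total / len(a)
```

Minimizing this loss pulls the two views of each sample together in embedding space while pushing apart embeddings of different samples, which is what lets the encoder learn useful structure without any labels.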
Papers
CLAP: Unsupervised 3D Representation Learning for Fusion 3D Perception via Curvature Sampling and Prototype Learning
Runjian Chen, Hang Zhang, Avinash Ravichandran, Wenqi Shao, Alex Wong, Ping Luo
TREND: Unsupervised 3D Representation Learning via Temporal Forecasting for LiDAR Perception
Runjian Chen, Hyoungseob Park, Bo Zhang, Wenqi Shao, Ping Luo, Alex Wong