3D Pre-Training
3D pre-training aims to leverage the success of large-scale 2D pre-trained models to improve the performance of 3D perception tasks, addressing the limitations of smaller, less diverse 3D datasets. Current research focuses on developing efficient pre-training frameworks using various techniques, including masked autoencoders, contrastive learning, and knowledge distillation from 2D models, often incorporating transformer architectures and neural radiance fields (NeRFs). These advancements are significant because they enable more accurate and efficient 3D scene understanding, impacting applications such as autonomous driving, medical image analysis, and 3D object manipulation.
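To make the masked-autoencoder idea mentioned above concrete, here is a toy sketch of its core pre-training objective for point clouds: hide a fraction of the points and train a model to reconstruct them, scoring the reconstruction with a Chamfer-style distance. This is a minimal plain-Python illustration with hypothetical helper names (`mask_points`, `chamfer_distance`), not the implementation from any of the papers listed below; real systems use learned transformer encoders/decoders over point patches.

```python
import random
import math

def mask_points(points, ratio=0.6, seed=0):
    """MAE-style masking step: randomly hide a fraction of the input points."""
    rng = random.Random(seed)
    idx = list(range(len(points)))
    rng.shuffle(idx)
    n_masked = int(len(points) * ratio)
    masked = [points[i] for i in idx[:n_masked]]    # reconstruction targets
    visible = [points[i] for i in idx[n_masked:]]   # encoder input
    return visible, masked

def chamfer_distance(a, b):
    """Symmetric Chamfer distance: mean nearest-neighbour distance, both ways."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)

# Toy point cloud of 10 points in 3D (placeholder data, not a real scan).
cloud = [(float(i), float(i) % 3.0, 0.5 * i) for i in range(10)]
visible, masked = mask_points(cloud, ratio=0.6)

# A real model would predict the masked points from the visible ones;
# here we score a "perfect" reconstruction to show the loss reaches zero.
loss = chamfer_distance(masked, masked)
```

In actual frameworks the `visible` subset is encoded (often with a transformer), the decoder predicts the `masked` points, and the Chamfer loss drives learning; contrastive and distillation objectives replace or complement this loss in the other approaches named above.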
Papers
TREND: Unsupervised 3D Representation Learning via Temporal Forecasting for LiDAR Perception
Runjian Chen, Hyoungseob Park, Bo Zhang, Wenqi Shao, Ping Luo, Alex Wong
3D Interaction Geometric Pre-training for Molecular Relational Learning
Namkyeong Lee, Yunhak Oh, Heewoong Noh, Gyoung S. Na, Minkai Xu, Hanchen Wang, Tianfan Fu, Chanyoung Park