3D Pre-Training
3D pre-training aims to leverage the success of large-scale 2D pre-trained models to improve the performance of 3D perception tasks, addressing the limitations of smaller, less diverse 3D datasets. Current research focuses on developing efficient pre-training frameworks using various techniques, including masked autoencoders, contrastive learning, and knowledge distillation from 2D models, often incorporating transformer architectures and neural radiance fields (NeRFs). These advancements are significant because they enable more accurate and efficient 3D scene understanding, impacting applications such as autonomous driving, medical image analysis, and 3D object manipulation.
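To make the masked-autoencoder direction concrete, the sketch below pre-trains a small transformer on raw point clouds by masking a fraction of point patches and reconstructing them from the visible ones. It assumes PyTorch; the module names, naive grouping scheme, masking ratio, and model sizes are illustrative choices for a minimal sketch, not the setup of any particular paper.

```python
# Minimal sketch of masked-autoencoder-style 3D pre-training on point clouds.
# All hyperparameters and the grouping strategy are illustrative assumptions.
import torch
import torch.nn as nn


class MaskedPointAutoencoder(nn.Module):
    """Mask point patches, encode the visible ones, reconstruct the masked ones."""

    def __init__(self, group_size=32, dim=256, mask_ratio=0.6):
        super().__init__()
        self.group_size, self.mask_ratio = group_size, mask_ratio
        self.patch_embed = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.pos_embed = nn.Linear(3, dim)                   # embed patch centers
        enc = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=4)
        dec = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, group_size * 3)           # predict xyz of each patch point

    def forward(self, pts):                                  # pts: (B, N, 3), N divisible by group_size
        B, N, _ = pts.shape
        G = N // self.group_size
        patches = pts.view(B, G, self.group_size, 3)         # naive grouping for brevity
        centers = patches.mean(dim=2)                        # (B, G, 3) patch centers
        rel = patches - centers.unsqueeze(2)                 # points relative to their center

        # Tokenize patches: per-point MLP followed by max pooling (mini-PointNet style).
        tokens = self.patch_embed(rel).max(dim=2).values     # (B, G, dim)

        # Randomly split patches into masked and visible sets.
        n_mask = int(G * self.mask_ratio)
        perm = torch.rand(B, G, device=pts.device).argsort(dim=1)
        mask_idx, vis_idx = perm[:, :n_mask], perm[:, n_mask:]
        gather = lambda x, idx: torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

        # Encode visible patches only, with positional embeddings of their centers.
        vis_tokens = gather(tokens, vis_idx) + self.pos_embed(gather(centers, vis_idx))
        latent = self.encoder(vis_tokens)

        # Decoder sees the encoded visible patches plus mask tokens placed at the masked centers.
        mask_tokens = self.mask_token.expand(B, n_mask, -1) + self.pos_embed(gather(centers, mask_idx))
        decoded = self.decoder(torch.cat([latent, mask_tokens], dim=1))
        pred = self.head(decoded[:, -n_mask:]).view(B, n_mask, self.group_size, 3)

        target = gather(rel.flatten(2), mask_idx).view(B, n_mask, self.group_size, 3)
        # Chamfer distance is the more common reconstruction loss; MSE kept for brevity.
        return nn.functional.mse_loss(pred, target)


# Usage: one self-supervised pre-training step on a random point-cloud batch.
model = MaskedPointAutoencoder()
loss = model(torch.randn(2, 1024, 3))
loss.backward()
```

In practice, methods of this kind typically group points with farthest-point sampling and k-nearest neighbors rather than the naive reshape above, and the pre-trained encoder is then fine-tuned on downstream 3D perception tasks.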