3D Pre-Training
3D pre-training aims to transfer the success of large-scale 2D pre-trained models to 3D perception tasks, compensating for the smaller size and lower diversity of available 3D datasets. Current research focuses on efficient pre-training frameworks built on masked autoencoders, contrastive learning, and knowledge distillation from 2D models, often incorporating transformer architectures and neural radiance fields (NeRFs). These advances matter because they enable more accurate and efficient 3D scene understanding, with applications in autonomous driving, medical image analysis, and 3D object manipulation.
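As a rough illustration of the masked-autoencoder idea applied to point clouds, the sketch below groups a point cloud into patches and hides most of them; an encoder would see only the visible patches and a decoder would reconstruct the masked ones. This is a minimal numpy sketch under assumed simplifications (nearest-seed grouping stands in for the farthest-point sampling plus kNN grouping used in practice; the patch count and 75% mask ratio are illustrative choices, not any specific paper's settings):

```python
import numpy as np

def mask_point_patches(points, num_patches=8, mask_ratio=0.75, rng=None):
    """Split a point cloud into patches and mask a fraction of them.

    points: (N, 3) array. Patches are formed by assigning each point to
    its nearest random seed point (a crude stand-in for FPS + kNN grouping).
    Returns visible patch ids, masked patch ids, and per-point patch labels.
    """
    rng = np.random.default_rng(rng)
    seeds = points[rng.choice(len(points), num_patches, replace=False)]
    # Assign each point to its nearest seed -> patch label in [0, num_patches)
    dists = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    # Randomly mask a fixed fraction of patches; during pre-training the
    # encoder sees only the visible patches and the decoder is trained to
    # reconstruct the point coordinates of the masked ones.
    n_masked = int(round(mask_ratio * num_patches))
    masked = rng.choice(num_patches, n_masked, replace=False)
    visible = np.setdiff1d(np.arange(num_patches), masked)
    return visible, masked, labels

points = np.random.default_rng(0).normal(size=(1024, 3))
vis, msk, labels = mask_point_patches(points, num_patches=8, mask_ratio=0.75, rng=0)
print(len(vis), len(msk))  # prints "2 6": 2 visible patches, 6 masked
```

The high mask ratio is the key design choice: because point patches are spatially redundant, hiding most of them forces the encoder to learn geometry-aware representations rather than interpolate from neighbors.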