3D Backbone
3D backbone research develops efficient and effective neural network architectures for processing three-dimensional point cloud data, with the goal of improving both the accuracy and the speed of tasks such as object detection, segmentation, and scene understanding. Current work centers on novel architectures, including U-Nets, Swin Transformers, and other transformer-based models, often paired with techniques such as self-supervised learning and knowledge distillation to boost performance. These advances are crucial for autonomous driving, robotics, and medical imaging, where accurate, real-time 3D data processing is essential.
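Many point cloud backbones, whatever their outer architecture, share a core recipe: a per-point network applied identically to every point, followed by an order-invariant pooling step. A minimal NumPy sketch of that idea (the function name, layer sizes, and random weights here are invented for illustration, not any specific paper's design):

```python
import numpy as np

def pointnet_backbone(points, w1, w2):
    """Encode an (N, 3) point cloud into one global feature vector.

    A shared two-layer MLP with ReLU is applied to each point, then
    max pooling over points yields a permutation-invariant feature.
    """
    h = np.maximum(points @ w1, 0.0)   # (N, 3) -> (N, 64), weights shared across points
    h = np.maximum(h @ w2, 0.0)        # (N, 64) -> (N, 128)
    return h.max(axis=0)               # order-invariant global feature, shape (128,)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))
w1 = rng.normal(size=(3, 64)) * 0.1
w2 = rng.normal(size=(64, 128)) * 0.1
feat = pointnet_backbone(pts, w1, w2)

# Shuffling the points leaves the global feature unchanged,
# which is why pooling-based backbones suit unordered point clouds.
shuffled = pts[rng.permutation(len(pts))]
assert np.allclose(feat, pointnet_backbone(shuffled, w1, w2))
```

The max pooling is what makes the encoding insensitive to point ordering; real backbones replace the tiny MLP with sparse convolutions or transformer blocks but keep the same invariance principle.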
Papers
Three Pillars improving Vision Foundation Model Distillation for Lidar
Gilles Puy, Spyros Gidaris, Alexandre Boulch, Oriane Siméoni, Corentin Sautier, Patrick Pérez, Andrei Bursuc, Renaud Marlet
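This paper studies distilling features from a 2D vision foundation model into a lidar backbone. A minimal sketch of the generic mechanism such distillation builds on, a cosine feature-matching loss between student point features and teacher image features taken at each point's camera projection (the function name and shapes are illustrative assumptions, not the paper's exact losses):

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats):
    """Mean cosine distance between paired feature sets.

    student_feats: (N, D) features from the lidar backbone, one per point.
    teacher_feats: (N, D) frozen image features sampled at the pixels
                   the lidar points project onto.
    Returns a scalar in [0, 2]; 0 means perfectly aligned directions.
    """
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(s * t, axis=1))
```

Minimizing this loss over the student's parameters pulls each point feature toward the direction of its paired image feature, transferring the teacher's representation without 3D labels.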
BEVContrast: Self-Supervision in BEV Space for Automotive Lidar Point Clouds
Corentin Sautier, Gilles Puy, Alexandre Boulch, Renaud Marlet, Vincent Lepetit
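BEVContrast defines its self-supervised objective on features pooled in bird's-eye-view (BEV) cells rather than on individual points. A minimal NumPy sketch of the BEV pooling step such a method relies on (the grid size, spatial extent, and max pooling here are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def bev_max_pool(points, feats, grid=32, extent=50.0):
    """Scatter per-point features into a BEV grid by max pooling.

    points: (N, 3) x/y/z coordinates in meters.
    feats:  (N, D) per-point features from a 3D backbone.
    Returns a (grid, grid, D) feature map covering [-extent, extent]
    along x and y; the z coordinate is ignored (top-down view).
    """
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * grid).astype(int), 0, grid - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * grid).astype(int), 0, grid - 1)
    bev = np.full((grid, grid, feats.shape[1]), -np.inf)
    for i in range(len(points)):
        bev[iy[i], ix[i]] = np.maximum(bev[iy[i], ix[i]], feats[i])
    bev[np.isinf(bev)] = 0.0  # cells containing no points become zeros
    return bev
```

Pooling into BEV cells gives coarse, drift-tolerant correspondences between two scans of the same scene, which is what makes cell-level contrastive objectives cheap compared to point-level matching.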