3D Semantic Segmentation
3D semantic segmentation assigns a semantic label (e.g., car, building, tree) to every point in a 3D scene, enabling detailed scene understanding. Current research focuses on improving accuracy and efficiency, particularly through transformer-based architectures, sparse convolutional networks, and techniques such as self-supervised learning and domain adaptation that mitigate data scarcity and variability. Accurate, robust 3D scene understanding is essential for applications such as autonomous driving, robotics, and medical image analysis, and advances in segmentation quality translate directly into richer, more detailed models of the 3D world in these domains.
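To make the per-point labeling task concrete, the minimal PyTorch sketch below classifies every point in a point cloud by combining per-point features with a pooled global scene descriptor, in the spirit of PointNet-style segmentation heads. All names, layer sizes, and the class count are illustrative assumptions, not details taken from the papers listed below.

```python
# Minimal sketch of per-point 3D semantic segmentation (illustrative only).
import torch
import torch.nn as nn

class PointSegNet(nn.Module):
    """Assigns one semantic label to every point by fusing local
    per-point features with a max-pooled global scene descriptor."""

    def __init__(self, num_classes: int = 13, in_dim: int = 3):
        # num_classes=13 is an arbitrary choice for this sketch.
        super().__init__()
        # Shared per-point MLP, implemented as 1x1 convolutions.
        self.local = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        # Per-point classifier over concatenated local + global features.
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3, num_points) point coordinates.
        feats = self.local(xyz)                              # (B, 128, N)
        glob = feats.max(dim=2, keepdim=True).values         # (B, 128, 1)
        glob = glob.expand(-1, -1, feats.size(2))            # (B, 128, N)
        return self.head(torch.cat([feats, glob], dim=1))    # (B, C, N)

if __name__ == "__main__":
    model = PointSegNet(num_classes=13)
    points = torch.randn(2, 3, 1024)      # 2 scenes, 1024 points each
    labels = model(points).argmax(dim=1)  # one class id per point
    print(labels.shape)                   # torch.Size([2, 1024])
```

The max-pooled descriptor gives each point access to scene-level context; the sparse convolutional and transformer-based methods surveyed above replace this simple pooling with voxelized sparse convolutions or attention to capture richer spatial structure.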
Papers
AMAES: Augmented Masked Autoencoder Pretraining on Public Brain MRI Data for 3D-Native Segmentation
Asbjørn Munk, Jakob Ambsdorf, Sebastian Llambias, Mads Nielsen
SegStitch: Multidimensional Transformer for Robust and Efficient Medical Imaging Segmentation
Shengbo Tan, Zeyu Zhang, Ying Cai, Daji Ergu, Lin Wu, Binbin Hu, Pengzhang Yu, Yang Zhao