Semantic Segmentation
Semantic segmentation is the task of assigning a semantic class label to every pixel in an image, enabling precise pixel-level scene understanding. Current research emphasizes improving accuracy and efficiency across diverse data modalities (RGB, depth, LiDAR, hyperspectral, and time series) and challenging conditions (low light, adverse weather, imbalanced datasets), often employing architectures such as transformers and diffusion models alongside new loss functions and training strategies. The field underpins applications such as autonomous driving, medical image analysis, remote sensing, and robotics, and these demands drive advances in both model robustness and interpretability.
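To make the per-pixel formulation concrete, here is a minimal sketch of pixel-wise classification in PyTorch. The toy encoder-decoder, class count, and tensor shapes are illustrative assumptions, not taken from any of the papers listed below; the point is only that the model outputs one class logit per pixel and the loss is averaged over all pixels.

```python
# Minimal sketch: per-pixel classification with a toy fully convolutional network.
# Everything here (TinySegNet, NUM_CLASSES, shapes) is hypothetical, for illustration only.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # hypothetical number of semantic classes

class TinySegNet(nn.Module):
    """Toy encoder-decoder that predicts a class logit for every pixel."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, kernel_size=1),    # 1x1 conv -> per-pixel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))              # (B, num_classes, H, W)

model = TinySegNet(NUM_CLASSES)
images = torch.randn(2, 3, 64, 64)                        # batch of RGB images
labels = torch.randint(0, NUM_CLASSES, (2, 64, 64))       # one class id per pixel

logits = model(images)                                    # (2, 5, 64, 64)
loss = nn.CrossEntropyLoss()(logits, labels)              # cross-entropy averaged over pixels
pred = logits.argmax(dim=1)                               # (2, 64, 64) predicted label map
print(loss.item(), pred.shape)
```

The per-pixel cross-entropy shown here is the baseline objective; the papers below explore variations on this setup, such as alternative supervision, loss functions, and multi-modal inputs.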
Papers
Branches Mutual Promotion for End-to-End Weakly Supervised Semantic Segmentation
Lei Zhu, Hangzhou He, Xinliang Zhang, Qian Chen, Shuang Zeng, Qiushi Ren, Yanye Lu
MixReorg: Cross-Modal Mixed Patch Reorganization is a Good Mask Learner for Open-World Semantic Segmentation
Kaixin Cai, Pengzhen Ren, Yi Zhu, Hang Xu, Jianzhuang Liu, Changlin Li, Guangrun Wang, Xiaodan Liang
Continual Road-Scene Semantic Segmentation via Feature-Aligned Symmetric Multi-Modal Network
Francesco Barbato, Elena Camuffo, Simone Milani, Pietro Zanuttigh
Syn-Mediverse: A Multimodal Synthetic Dataset for Intelligent Scene Understanding of Healthcare Facilities
Rohit Mohan, José Arce, Sassan Mokhtar, Daniele Cattaneo, Abhinav Valada
Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error
Zixin Wang, Yadan Luo, Zhi Chen, Sen Wang, Zi Huang
Learning to Generate Training Datasets for Robust Semantic Segmentation
Marwane Hariat, Olivier Laurent, Rémi Kazmierczak, Shihao Zhang, Andrei Bursuc, Angela Yao, Gianni Franchi
Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding
Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, Xiaojuan Qi