Weakly Supervised Semantic Segmentation
Weakly supervised semantic segmentation (WSSS) aims to train accurate image segmentation models using only image-level labels, significantly reducing the need for expensive pixel-level annotations. Current research focuses on improving the quality of pseudo-labels generated from these weak labels, often employing vision transformers (ViTs) and convolutional neural networks (CNNs) in conjunction with techniques like contrastive learning, domain adaptation, and multi-modal approaches (e.g., incorporating text embeddings). Advances in WSSS are crucial for expanding the applicability of semantic segmentation to diverse domains where large, fully annotated datasets are unavailable, impacting fields such as medical image analysis and autonomous driving.
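To make the pseudo-label generation step concrete, the sketch below shows the classic CAM-based pipeline that the first listed paper contrasts saliency maps against: a classifier trained on image-level labels is used to produce class activation maps, which are thresholded into per-pixel pseudo-labels. This is a minimal illustrative sketch, not the method of any listed paper; the ResNet-50 backbone, the 20-class setup, and the `CAM_THRESHOLD` value are assumptions, and a real pipeline would first train the classifier head on the image-level tags.

```python
# Minimal sketch of CAM-based pseudo-label generation for WSSS (illustrative only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

NUM_CLASSES = 20       # e.g., PASCAL VOC foreground classes (assumption)
CAM_THRESHOLD = 0.3    # pixels scoring below this become background (assumption)

backbone = resnet50(weights=None)
# Re-purpose the final fully connected layer as a multi-label classifier head.
# In practice this head is first trained with a multi-label loss on image-level tags.
backbone.fc = torch.nn.Linear(backbone.fc.in_features, NUM_CLASSES)
backbone.eval()

def generate_pseudo_labels(images: torch.Tensor, image_labels: torch.Tensor) -> torch.Tensor:
    """Turn image-level labels into per-pixel pseudo-labels via class activation maps.

    images:       (B, 3, H, W) normalized images
    image_labels: (B, NUM_CLASSES) multi-hot image-level labels
    returns:      (B, H, W) pseudo-label maps, 0 = background, c + 1 = class c
    """
    with torch.no_grad():
        # Forward through the convolutional trunk to keep spatial feature maps.
        x = backbone.conv1(images)
        x = backbone.bn1(x)
        x = backbone.relu(x)
        x = backbone.maxpool(x)
        x = backbone.layer1(x)
        x = backbone.layer2(x)
        x = backbone.layer3(x)
        feats = backbone.layer4(x)                      # (B, 2048, h, w)

        # CAMs: project features with the classifier weights (1x1-conv equivalent).
        weights = backbone.fc.weight                    # (NUM_CLASSES, 2048)
        cams = F.relu(torch.einsum("bchw,kc->bkhw", feats, weights))

        # Keep only classes present in the image-level label, normalize to [0, 1].
        cams = cams * image_labels[:, :, None, None]
        cams = cams / (cams.amax(dim=(2, 3), keepdim=True) + 1e-5)

        # Upsample to input resolution and threshold into hard pseudo-labels.
        cams = F.interpolate(cams, size=images.shape[-2:], mode="bilinear",
                             align_corners=False)
        scores, labels = cams.max(dim=1)                # best foreground class per pixel
        pseudo = torch.where(scores > CAM_THRESHOLD, labels + 1,
                             torch.zeros_like(labels))
    return pseudo

# Usage with random tensors standing in for real images and their image-level tags.
imgs = torch.randn(2, 3, 224, 224)
tags = torch.zeros(2, NUM_CLASSES)
tags[0, 3] = 1
tags[1, [1, 7]] = 1
masks = generate_pseudo_labels(imgs, tags)
print(masks.shape, masks.unique())
```

The resulting pseudo-label maps would then supervise a standard segmentation network; much of the research summarized above (saliency maps, cross-view consistency, contrastive or multi-modal cues) targets the weaknesses of this thresholded-CAM step, which tends to highlight only the most discriminative object regions.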
Papers
Beyond Discriminative Regions: Saliency Maps as Alternatives to CAMs for Weakly Supervised Semantic Segmentation
M. Maruf, Arka Daw, Amartya Dutta, Jie Bu, Anuj Karpatne
CVFC: Attention-Based Cross-View Feature Consistency for Weakly Supervised Semantic Segmentation of Pathology Images
Liangrui Pan, Lian Wang, Zhichao Feng, Liwen Xu, Shaoliang Peng