Pixel-Level Annotation
Pixel-level annotation, the process of labeling each pixel in an image with a specific class, is crucial for training many computer vision models, particularly those for semantic segmentation. Because full pixel masks are expensive to produce, current research focuses on reducing the substantial manual effort required, exploring techniques like weak supervision (using bounding boxes or points instead of full pixel masks), semi-supervised learning (combining labeled and unlabeled data), and active learning (iteratively selecting the most informative pixels to label). These advances leverage a range of model architectures, including transformers, convolutional neural networks, and diffusion models, to improve annotation efficiency and accuracy. Cheaper, more accurate annotation in turn enables more accurate and robust models for applications ranging from medical image analysis to autonomous driving.
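As a concrete illustration of the active-learning idea mentioned above, the sketch below selects the pixels a model is most uncertain about (highest predictive entropy) as candidates for manual labeling. This is a minimal, generic example, not the method of any specific paper; the function names and the uncertainty criterion are illustrative choices.

```python
import numpy as np

def entropy_map(probs):
    """Per-pixel predictive entropy from softmax probabilities.

    probs: (H, W, C) array; each pixel's class probabilities sum to 1.
    Uniform predictions give maximal entropy (most uncertain)."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def select_pixels_to_label(probs, budget):
    """Return (row, col) indices of the `budget` most uncertain pixels.

    In an active-learning loop these pixels would be sent to a human
    annotator, and the model retrained on the enlarged label set."""
    ent = entropy_map(probs)
    top = np.argsort(ent.ravel())[::-1][:budget]  # highest entropy first
    return np.stack(np.unravel_index(top, ent.shape), axis=1)

# Toy example: a 4x4 image with 3 classes. Every pixel is confidently
# predicted as class 0 except one ambiguous pixel at (2, 3).
probs = np.zeros((4, 4, 3))
probs[..., 0] = 1.0
probs[2, 3] = [1 / 3, 1 / 3, 1 / 3]
picked = select_pixels_to_label(probs, budget=1)
print(picked)  # [[2 3]] -- the ambiguous pixel is queried first
```

In practice the per-pixel probabilities would come from a segmentation network's softmax output, and queries are usually made at the region or superpixel level rather than for isolated pixels, since labeling a single pixel in context costs an annotator almost as much as labeling its neighborhood.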
Papers
SegDA: Maximum Separable Segment Mask with Pseudo Labels for Domain Adaptive Semantic Segmentation
Anant Khandelwal
Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot Segment Anything Model for Molecular-empowered Learning
Xueyuan Li, Ruining Deng, Yucheng Tang, Shunxing Bao, Haichun Yang, Yuankai Huo