Weak Annotation
Weak annotation trains machine learning models with labels that are cheaper and faster to produce than full pixel-wise annotation, such as bounding boxes, sparse points, or image-level class labels. Current research explores strategies to exploit these weak labels effectively, often combining them with foundation models (such as CLIP or SAM), active learning, or self-supervised learning to improve model performance. Because it sharply reduces the cost and effort of data annotation, weak annotation is crucial for applications with limited resources, such as medical image analysis, and enables the development of models for tasks previously hindered by annotation bottlenecks.
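To make the idea concrete, a common baseline for weakly supervised segmentation is to expand a bounding-box weak label into a coarse pixel-wise pseudo-mask, which can then supervise a segmentation model as if it were a full annotation. The sketch below assumes this box-to-mask baseline; the function name and box convention are illustrative, not from any specific paper above.

```python
import numpy as np

def box_to_pseudo_mask(height, width, box):
    """Turn a bounding-box weak label into a coarse pseudo-mask.

    box = (x_min, y_min, x_max, y_max) in pixel coordinates; every pixel
    inside the box is marked foreground (1), all others background (0).
    This is a deliberately crude baseline: refinement steps (e.g. CRFs or
    SAM-generated masks) are typically applied on top in practice.
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = 1
    return mask

# A 10x10 image whose only weak label is a box over rows 3-5, cols 2-5.
mask = box_to_pseudo_mask(10, 10, (2, 3, 6, 6))
```

The resulting pseudo-mask over-segments the object (the box includes background corners), which is exactly the noise that the refinement strategies surveyed above aim to correct.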
Papers
The Treasure Beneath Multiple Annotations: An Uncertainty-aware Edge Detector
Caixia Zhou, Yaping Huang, Mengyang Pu, Qingji Guan, Li Huang, Haibin Ling
Full or Weak annotations? An adaptive strategy for budget-constrained annotation campaigns
Javier Gamazo Tejero, Martin S. Zinkernagel, Sebastian Wolf, Raphael Sznitman, Pablo Márquez Neila