Weak Annotation

Weak annotation in machine learning refers to training models with labels that are cheaper and faster to produce than full pixel-wise masks, such as bounding boxes, sparse points, or image-level labels. Current research explores strategies to leverage these weak annotations effectively, often combining them with foundation models (such as CLIP or SAM), active learning, or self-supervised learning to improve performance. Because it sharply reduces the cost and effort of data annotation, weak annotation is especially valuable in resource-constrained domains such as medical image analysis, and it enables models for tasks that were previously blocked by annotation bottlenecks.
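
As one concrete illustration, a common way to exploit image-level labels for segmentation is to train an ordinary classifier and convert its class activation maps (CAMs) into coarse pseudo-masks. The sketch below assumes a PyTorch/torchvision ResNet-18 backbone and hypothetical `images`/`labels` tensors; the class count, threshold, and helper names are illustrative rather than taken from any specific paper.

```python
# Minimal sketch: turning image-level labels into segmentation pseudo-masks via CAM.
# Assumptions: torchvision ResNet-18, a batch of `images` (B, 3, H, W) and
# `labels` (B,) long tensor of image-level class indices; threshold is arbitrary.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

num_classes = 20  # assumption: e.g. a PASCAL VOC-style class count

# Classifier trained only with image-level labels (the weak annotation).
backbone = resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, num_classes)

def forward_features(model, x):
    """Return the final conv feature map (B, C, h, w) of a torchvision ResNet."""
    x = model.conv1(x); x = model.bn1(x); x = model.relu(x); x = model.maxpool(x)
    x = model.layer1(x); x = model.layer2(x); x = model.layer3(x); x = model.layer4(x)
    return x

@torch.no_grad()
def cam_pseudo_masks(model, images, labels, threshold=0.4):
    """Derive coarse binary pseudo-masks from image-level labels via CAM."""
    model.eval()
    feats = forward_features(model, images)                  # (B, C, h, w)
    weights = model.fc.weight                                 # (num_classes, C)
    cams = F.relu(torch.einsum("kc,bchw->bkhw", weights, feats))
    # Keep the map of each image's labeled class and normalize to [0, 1].
    cams = cams[torch.arange(images.size(0)), labels]          # (B, h, w)
    cams = cams / (cams.amax(dim=(1, 2), keepdim=True) + 1e-6)
    cams = F.interpolate(cams.unsqueeze(1), size=images.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze(1)
    return (cams > threshold).long()                           # (B, H, W) pseudo-masks

# Usage (after the classifier has been trained with image-level labels only):
# pseudo_masks = cam_pseudo_masks(backbone, images, labels)
```

In practice, such CAM-derived pseudo-masks are usually refined, for example with CRF post-processing or by prompting SAM with the high-activation regions, before they are used to supervise a segmentation network.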

Papers