Pseudo Supervision

Pseudo-supervision trains machine learning models with automatically generated labels, typically produced by existing models or data transformations, in settings where true labels are scarce or unavailable. Current research focuses on improving the quality and reliability of these pseudo-labels through techniques like distinctive caption sampling, consistency losses, and label revision, often within specific architectures such as transformers and GANs. By reducing dependence on costly manual annotation, this approach has enabled advances in areas including image segmentation, anomaly detection, and cross-domain classification, improving both model performance and training efficiency.
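
To make the basic idea concrete, here is a minimal pseudo-labeling sketch in PyTorch: a pretrained "teacher" model assigns labels to unlabeled inputs, only confident predictions are kept, and a "student" model is trained on the union of true and pseudo-labeled examples. The models, threshold, loss weighting, and dimensions are illustrative assumptions, not taken from any of the papers below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(32, 10)   # stand-in for an existing pretrained model
student = nn.Linear(32, 10)   # model trained with pseudo-supervision
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

labeled_x = torch.randn(64, 32)
labeled_y = torch.randint(0, 10, (64,))
unlabeled_x = torch.randn(256, 32)

CONFIDENCE_THRESHOLD = 0.9  # keep only pseudo-labels the teacher is confident about

# 1. Generate pseudo-labels with the teacher (no gradients needed).
with torch.no_grad():
    probs = F.softmax(teacher(unlabeled_x), dim=1)
    confidence, pseudo_y = probs.max(dim=1)
    mask = confidence > CONFIDENCE_THRESHOLD
pseudo_x, pseudo_y = unlabeled_x[mask], pseudo_y[mask]

# 2. Train the student on true labels plus the filtered pseudo-labels.
for _ in range(10):
    optimizer.zero_grad()
    loss = F.cross_entropy(student(labeled_x), labeled_y)
    if len(pseudo_x) > 0:
        # Down-weight the pseudo-labeled term, since these labels are noisy.
        loss = loss + 0.5 * F.cross_entropy(student(pseudo_x), pseudo_y)
    loss.backward()
    optimizer.step()
```

The confidence threshold and the reduced loss weight are simple stand-ins for the label-quality mechanisms (consistency losses, label revision) that the research surveyed here develops in more principled forms.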

Papers