Pseudo Supervision
Pseudo-supervision trains machine learning models on automatically generated labels, often produced by existing models or by data transformations, in settings where true labels are scarce or unavailable. Current research focuses on improving the quality and reliability of these pseudo-labels, using techniques such as distinctive caption sampling, consistency losses, and label revision, often within specific architectures such as transformers and GANs. Because it reduces the high cost of manual annotation, this approach has enabled advances in image segmentation, anomaly detection, and cross-domain classification, improving both model performance and training efficiency.
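To make the core idea concrete, the sketch below shows one common flavor of pseudo-supervision: a model generates pseudo-labels for unlabeled data, only confident predictions are kept, and those are mixed into the training loss alongside the true labels. This is a minimal illustrative example in PyTorch, not the method of any particular paper listed here; the model, data, threshold, and loss weight are all assumed placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

num_classes = 5
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, num_classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Small labeled set and larger unlabeled set (random stand-ins for real data).
x_labeled = torch.randn(32, 16)
y_labeled = torch.randint(0, num_classes, (32,))
x_unlabeled = torch.randn(256, 16)

confidence_threshold = 0.9  # assumed cutoff; keep only confident pseudo-labels

for epoch in range(10):
    # 1. Supervised loss on the true labels.
    logits = model(x_labeled)
    loss = F.cross_entropy(logits, y_labeled)

    # 2. Generate pseudo-labels for the unlabeled data from the current model.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = conf > confidence_threshold  # filter out unreliable pseudo-labels

    # 3. Add a pseudo-supervised loss on the confident subset only.
    if mask.any():
        pseudo_logits = model(x_unlabeled[mask])
        loss = loss + 0.5 * F.cross_entropy(pseudo_logits, pseudo_labels[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The confidence threshold and the fixed weight on the pseudo-label term are the simplest choices; the label-revision and consistency-loss techniques mentioned above can be seen as more principled ways of deciding which pseudo-labels to trust and how strongly to weight them.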