Semi-Supervised Training
Semi-supervised training aims to leverage both labeled and unlabeled data to train machine learning models, reducing the reliance on expensive and time-consuming data annotation. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques like self-training, consistency regularization, and generative models (e.g., diffusion models, VAEs) within various architectures such as transformers and graph convolutional networks. This approach is particularly impactful in domains with limited labeled data, such as medical image analysis, speech recognition, and object detection, enabling the development of more accurate and robust models with reduced annotation costs.
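The self-training idea mentioned above can be illustrated with a minimal sketch: train on the few labeled points, predict on the unlabeled pool, keep only high-confidence predictions as pseudo-labels, and repeat. The toy two-cluster dataset, the nearest-centroid classifier, and the margin-based confidence threshold below are all illustrative assumptions, not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters, only one labeled example per class.
n = 200
X = np.vstack([
    rng.normal(loc=-2.0, scale=0.7, size=(n, 2)),
    rng.normal(loc=+2.0, scale=0.7, size=(n, 2)),
])
y_true = np.array([0] * n + [1] * n)

labeled = np.zeros(2 * n, dtype=bool)
labeled[[0, n]] = True              # one labeled point per class
y = np.where(labeled, y_true, -1)   # -1 marks unlabeled points

def centroids(X, y):
    """Class centroids from the currently (pseudo-)labeled points."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Self-training loop: predict, keep confident pseudo-labels, retrain.
for _ in range(10):
    c = centroids(X, y)
    d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
    pred = d.argmin(axis=1)
    margin = np.abs(d[:, 0] - d[:, 1])          # confidence proxy
    confident = (y == -1) & (margin > 1.0)      # accept only sure points
    y[confident] = pred[confident]

accuracy = (pred == y_true).mean()
print(f"accuracy: {accuracy:.2f}")
```

With only two labeled points, the loop recovers nearly all labels because confident pseudo-labels steadily refine the centroids; the same skeleton underlies production self-training utilities such as scikit-learn's `SelfTrainingClassifier`, which likewise uses `-1` to mark unlabeled samples.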