Semi-Supervised Training
Semi-supervised training aims to leverage both labeled and unlabeled data to train machine learning models, reducing the reliance on expensive and time-consuming data annotation. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques like self-training, consistency regularization, and generative models (e.g., diffusion models, VAEs) within various architectures such as transformers and graph convolutional networks. This approach is particularly impactful in domains with limited labeled data, such as medical image analysis, speech recognition, and object detection, enabling the development of more accurate and robust models with reduced annotation costs.
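To make the self-training idea above concrete, here is a minimal sketch of confidence-thresholded pseudo-labeling in PyTorch. The toy linear model, the synthetic 2-D data, the 0.9 confidence threshold, and the 0.5 unsupervised-loss weight are all illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy setup: a small labeled set and a larger unlabeled set (hypothetical data).
x_labeled = torch.randn(32, 2)
y_labeled = (x_labeled[:, 0] > 0).long()
x_unlabeled = torch.randn(256, 2)

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
CONFIDENCE_THRESHOLD = 0.9  # illustrative: keep only high-confidence pseudo-labels

for step in range(100):
    # Supervised loss on the labeled data.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Pseudo-labels: the model's own predictions on unlabeled data,
    # filtered by confidence so that low-quality labels are discarded.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf > CONFIDENCE_THRESHOLD

    unsup_loss = torch.tensor(0.0)
    if mask.any():
        unsup_loss = F.cross_entropy(model(x_unlabeled[mask]), pseudo_y[mask])

    loss = sup_loss + 0.5 * unsup_loss  # 0.5: illustrative weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The confidence mask is the key design knob: raising the threshold trades pseudo-label coverage for quality, which is precisely the trade-off that work on improving pseudo-label quality targets.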