Semi-Supervised Learning
Semi-supervised learning trains machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data, which is a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean teacher models within architectures including variational autoencoders, transformers, and graph neural networks. The approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
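Since the overview mentions training on pseudo-labels generated from unlabeled data, the following is a minimal PyTorch sketch of one confidence-thresholded pseudo-labeling training step (in the spirit of FixMatch-style methods). The function name, the 0.95 threshold, the unlabeled-loss weight, and the batch structure are illustrative assumptions, not the method of any specific paper listed below.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, optimizer, labeled_batch, unlabeled_batch,
                         threshold=0.95, unlabeled_weight=1.0):
    # labeled_batch: (inputs, ground-truth labels); unlabeled_batch: two
    # augmented views (weak, strong) of the same unlabeled inputs.
    x_l, y_l = labeled_batch
    x_u_weak, x_u_strong = unlabeled_batch

    # Standard supervised loss on the labeled data.
    sup_loss = F.cross_entropy(model(x_l), y_l)

    # Generate pseudo-labels from the weakly augmented view, without gradients.
    with torch.no_grad():
        probs = F.softmax(model(x_u_weak), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        # Keep only predictions above the confidence threshold.
        mask = (confidence >= threshold).float()

    # Consistency loss: the strongly augmented view should match the pseudo-label.
    per_sample = F.cross_entropy(model(x_u_strong), pseudo_labels, reduction="none")
    unsup_loss = (per_sample * mask).mean()

    loss = sup_loss + unlabeled_weight * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The threshold plays the role described in the papers below on automatic thresholding: only unlabeled samples whose pseudo-labels are sufficiently confident contribute to the loss, which limits the damage from noisy or out-of-distribution data.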
Papers
Improving Semi-supervised Deep Learning by using Automatic Thresholding to Deal with Out of Distribution Data for COVID-19 Detection using Chest X-ray Images
Isaac Benavides-Mata, Saul Calderon-Ramirez
Automatic Crater Shape Retrieval using Unsupervised and Semi-Supervised Systems
Atal Tewari, Vikrant Jain, Nitin Khanna
Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive Learning
Luyi Han, Yunzhi Huang, Tao Tan, Ritse Mann
Dual-distribution discrepancy with self-supervised refinement for anomaly detection in medical images
Yu Cai, Hao Chen, Xin Yang, Yu Zhou, Kwang-Ting Cheng