Semi-Supervised
Semi-supervised learning aims to train machine learning models using both labeled and unlabeled data, addressing the scarcity of labeled data that is a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
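To make the pseudo-labeling and mean teacher ideas mentioned above concrete, the sketch below shows one common pattern: a teacher model (an exponential moving average of the student) assigns pseudo-labels to unlabeled examples, and only confident predictions contribute to the loss. This is a minimal illustration of the general technique, not the method of any paper listed here; the `student`/`teacher` models, the confidence `threshold`, and the `unlabeled_weight` are hypothetical placeholders for a generic classification setting.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    """Update teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def semi_supervised_step(student, teacher, x_labeled, y_labeled, x_unlabeled,
                         threshold=0.95, unlabeled_weight=1.0):
    # Supervised loss on the small labeled batch.
    sup_loss = F.cross_entropy(student(x_labeled), y_labeled)

    # The teacher produces pseudo-labels for the unlabeled batch; only
    # predictions whose max probability exceeds the threshold are kept.
    with torch.no_grad():
        probs = F.softmax(teacher(x_unlabeled), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = conf >= threshold

    # Unsupervised loss on confidently pseudo-labeled examples only.
    unsup_loss = torch.tensor(0.0, device=x_labeled.device)
    if mask.any():
        unsup_loss = F.cross_entropy(student(x_unlabeled[mask]), pseudo_labels[mask])

    return sup_loss + unlabeled_weight * unsup_loss
```

In a typical training loop, `ema_update(teacher, student)` would be called after each optimizer step; the threshold trades off pseudo-label precision against how much of the unlabeled data is actually used.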
Papers
Deep evolving semi-supervised anomaly detection
Jack Belham, Aryan Bhosale, Samrat Mukherjee, Biplab Banerjee, Fabio Cuzzolin
A Semi-Supervised Approach with Error Reflection for Echocardiography Segmentation
Xiaoxiang Han, Yiman Liu, Jiang Shang, Qingli Li, Jiangang Chen, Menghan Hu, Qi Zhang, Yuqi Zhang, Yan Wang
The Last Mile to Supervised Performance: Semi-Supervised Domain Adaptation for Semantic Segmentation
Daniel Morales-Brotons, Grigorios Chrysos, Stratis Tzoumas, Volkan Cevher
Leveraging Semi-Supervised Learning to Enhance Data Mining for Image Classification under Limited Labeled Data
Aoran Shen, Minghao Dai, Jiacheng Hu, Yingbin Liang, Shiru Wang, Junliang Du