Semi-Supervised Learning
Semi-supervised learning trains machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data that is a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean-teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
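Two of the recurring ingredients mentioned above, confidence-thresholded pseudo-labeling and the mean-teacher exponential moving average, can be sketched in a few lines. This is a minimal, framework-free illustration using NumPy arrays as stand-ins for model outputs and weights; the function names, threshold, and decay values are illustrative choices, not an implementation from any of the listed papers.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    # Keep only unlabeled examples whose maximum class probability
    # exceeds the confidence threshold; return their indices and
    # the corresponding hard pseudo-labels (argmax class).
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

def ema_update(teacher, student, decay=0.99):
    # Mean-teacher update: the teacher's weights track an
    # exponential moving average of the student's weights.
    return decay * teacher + (1 - decay) * student

# Toy predicted class probabilities for 4 unlabeled examples, 3 classes.
probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> pseudo-labeled as class 0
    [0.40, 0.35, 0.25],   # uncertain -> discarded
    [0.05, 0.94, 0.01],   # just below threshold -> discarded
    [0.01, 0.01, 0.98],   # confident -> pseudo-labeled as class 2
])
idx, labels = pseudo_label(probs)
print(idx, labels)        # [0 3] [0 2]

# One EMA step on toy "weights": teacher drifts slowly toward student.
teacher = ema_update(np.zeros(3), np.ones(3))
print(teacher)            # [0.01 0.01 0.01]
```

In a full training loop, the retained pseudo-labeled examples would be mixed into the supervised loss, while the slowly-updated teacher generates the predictions used for pseudo-labeling, which is the core idea behind mean-teacher-style methods.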
Papers
Organ localisation using supervised and semi supervised approaches combining reinforcement learning with imitation learning
Sankaran Iyer, Alan Blair, Laughlin Dawes, Daniel Moses, Christopher White, Arcot Sowmya
Clue Me In: Semi-Supervised FGVC with Out-of-Distribution Data
Ruoyi Du, Dongliang Chang, Zhanyu Ma, Yi-Zhe Song, Jun Guo
CoDiM: Learning with Noisy Labels via Contrastive Semi-Supervised Learning
Xin Zhang, Zixuan Liu, Kaiwen Xiao, Tian Shen, Junzhou Huang, Wei Yang, Dimitris Samaras, Xiao Han
Uncertainty-Aware Deep Co-training for Semi-supervised Medical Image Segmentation
Xu Zheng, Chong Fu, Haoyu Xie, Jialei Chen, Xingwei Wang, Chiu-Wing Sham