Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging limited labeled data together with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the generated pseudo-labels, employing teacher-student models and contrastive learning, and developing novel algorithms that exploit all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual labeling, expanding the applicability of machine learning to domains with little annotated data.
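To make the pseudo-labeling idea above concrete, here is a minimal FixMatch-style sketch in PyTorch: the model's own high-confidence predictions on a weakly augmented view serve as targets for a strongly augmented view, with low-confidence samples masked out to limit label noise. The names `model`, `weak_view`, `strong_view`, and the 0.95 threshold are illustrative assumptions, not drawn from any paper listed below.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, weak_view, strong_view, threshold=0.95):
    """Confidence-thresholded pseudo-labeling loss on an unlabeled batch."""
    # Generate pseudo-labels from the weakly augmented view, without gradients.
    with torch.no_grad():
        probs = F.softmax(model(weak_view), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        # Keep only high-confidence predictions to reduce pseudo-label noise.
        mask = (confidence >= threshold).float()
    # Train the model to match its confident predictions on the strong view.
    logits = model(strong_view)
    per_sample = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()

# Toy usage with a linear classifier and a perturbed copy as the "strong" view:
# model = torch.nn.Linear(10, 3)
# weak = torch.randn(8, 10)
# strong = weak + 0.1 * torch.randn(8, 10)
# loss = pseudo_label_loss(model, weak, strong)
```

In practice this unlabeled-data loss is added to the usual supervised cross-entropy on the labeled batch, with the threshold trading off pseudo-label coverage against noise.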
Papers
Mediffusion: Joint Diffusion for Self-Explainable Semi-Supervised Classification and Medical Image Generation
Joanna Kaleta, Paweł Skierś, Jan Dubiński, Przemysław Korzeniowski, Kamil Deja
TLDR: Traffic Light Detection using Fourier Domain Adaptation in Hostile WeatheR
Ishaan Gakhar, Aryesh Guha, Aryaman Gupta, Amit Agarwal, Durga Toshniwal, Ujjwal Verma
AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on Pseudo-Label Evaluation
Ran Lingyan, Wen Dongcheng, Zhuo Tao, Zhang Shizhou, Zhang Xiuwei, Zhang Yanning
Integrated Image-Text Based on Semi-supervised Learning for Small Sample Instance Segmentation
Ruting Chi, Zhiyi Huang, Yuexing Han
Leveraging CORAL-Correlation Consistency Network for Semi-Supervised Left Atrium MRI Segmentation
Xinze Li, Runlin Huang, Zhenghao Wu, Bohan Yang, Wentao Fan, Chengzhang Zhu, Weifeng Su
Affinity-Graph-Guided Contractive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation
Zehua Cheng, Di Yuan, Thomas Lukasiewicz
SpeGCL: Self-supervised Graph Spectrum Contrastive Learning without Positive Samples
Yuntao Shou, Xiangyong Cao, Deyu Meng
Exploring Semi-Supervised Learning for Online Mapping
Adam Lilja, Erik Wallin, Junsheng Fu, Lars Hammarstrand