Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging a small amount of labeled data alongside abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias pseudo-labels introduce, employing teacher-student models and contrastive learning, and developing algorithms that exploit all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advances are significant because they reduce reliance on expensive, time-consuming manual labeling, thereby extending machine learning to diverse domains with limited annotated data.
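The pseudo-labeling idea mentioned above can be illustrated with a minimal sketch: a teacher model trained on the few labeled points assigns labels to unlabeled points, a confidence threshold filters out likely-noisy pseudo-labels, and a student is retrained on the combined set. The toy data, the 0.95 threshold, and the use of logistic regression are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal pseudo-labeling (teacher-student) sketch on synthetic toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two Gaussian blobs: a handful of labeled points, many unlabeled ones.
X_lab = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

# 1) Train a teacher on the labeled set only.
teacher = LogisticRegression().fit(X_lab, y_lab)

# 2) Pseudo-label unlabeled points, keeping only confident predictions;
#    the threshold filters the noisiest pseudo-labels.
probs = teacher.predict_proba(X_unl)
mask = probs.max(axis=1) >= 0.95
pseudo_y = probs.argmax(axis=1)[mask]

# 3) Retrain a student on labeled plus confidently pseudo-labeled data.
X_train = np.vstack([X_lab, X_unl[mask]])
y_train = np.concatenate([y_lab, pseudo_y])
student = LogisticRegression().fit(X_train, y_train)

print(f"{mask.sum()} of {len(X_unl)} unlabeled points kept as pseudo-labels")
```

In practice the thresholding step is where much of the cited research effort goes: fixed thresholds trade off pseudo-label coverage against noise, and class-imbalanced or open-set unlabeled data can bias the teacher's confident predictions toward majority or in-distribution classes.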
Papers
SAM Carries the Burden: A Semi-Supervised Approach Refining Pseudo Labels for Medical Segmentation
Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich
Hypergraph $p$-Laplacian equations for data interpolation and semi-supervised learning
Kehan Shi, Martin Burger
Mediffusion: Joint Diffusion for Self-Explainable Semi-Supervised Classification and Medical Image Generation
Joanna Kaleta, Paweł Skierś, Jan Dubiński, Przemysław Korzeniowski, Kamil Deja
TLDR: Traffic Light Detection using Fourier Domain Adaptation in Hostile WeatheR
Ishaan Gakhar, Aryesh Guha, Aryaman Gupta, Amit Agarwal, Durga Toshniwal, Ujjwal Verma
AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on Pseudo-Label Evaluation
Ran Lingyan, Wen Dongcheng, Zhuo Tao, Zhang Shizhou, Zhang Xiuwei, Zhang Yanning
Integrated Image-Text Based on Semi-supervised Learning for Small Sample Instance Segmentation
Ruting Chi, Zhiyi Huang, Yuexing Han
Leveraging CORAL-Correlation Consistency Network for Semi-Supervised Left Atrium MRI Segmentation
Xinze Li, Runlin Huang, Zhenghao Wu, Bohan Yang, Wentao Fan, Chengzhang Zhu, Weifeng Su