Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve machine learning model accuracy by leveraging both limited labeled data and abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the generated pseudo-labels, employing teacher-student models and contrastive learning, and developing novel algorithms that effectively utilize all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive and time-consuming manual labeling, expanding the applicability of machine learning to domains with little annotated data.
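To make the pseudo-labeling idea concrete, here is a minimal sketch of confidence-thresholded pseudo-labeling (in the style popularized by FixMatch): unlabeled samples whose predicted class probability exceeds a threshold receive a hard pseudo-label and join the training set, while low-confidence samples are discarded to limit label noise. The function name, threshold value, and toy probabilities below are illustrative assumptions, not taken from any of the listed papers.

```python
def pseudo_label(probs, threshold=0.95):
    """Confidence-filtered pseudo-labeling sketch.

    probs: list of per-sample softmax probability vectors produced by a
    model on unlabeled inputs. Samples whose max class probability meets
    `threshold` are kept with a hard (argmax) pseudo-label; the rest are
    discarded to reduce pseudo-label noise.
    Returns (kept_indices, pseudo_labels).
    """
    kept_idx, labels = [], []
    for i, p in enumerate(probs):
        conf = max(p)                 # model confidence for this sample
        if conf >= threshold:         # keep only confident predictions
            kept_idx.append(i)
            labels.append(p.index(conf))  # hard pseudo-label = argmax class
    return kept_idx, labels

# Toy softmax outputs for 3 unlabeled samples over 3 classes (illustrative).
probs = [
    [0.98, 0.01, 0.01],  # confident  -> kept, pseudo-label 0
    [0.40, 0.35, 0.25],  # ambiguous  -> discarded
    [0.02, 0.97, 0.01],  # confident  -> kept, pseudo-label 1
]
idx, labels = pseudo_label(probs)
print(idx, labels)  # [0, 2] [0, 1]
```

In full SSL pipelines the confident pseudo-labels are typically produced on weakly augmented views and then used as targets for strongly augmented views of the same samples, which is the weak-strong augmentation design revisited by the SequenceMatch paper below.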
Papers
SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning
Khanh-Binh Nguyen
Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector
Khanh-Binh Nguyen
Towards Automated Recipe Genre Classification using Semi-Supervised Learning
Nazmus Sakib, G. M. Shahariar, Md. Mohsinul Kabir, Md. Kamrul Hasan, Hasan Mahmud
Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning
Qing Miao, Xiaohe Wu, Chao Xu, Yanli Ji, Wangmeng Zuo, Yiwen Guo, Zhaopeng Meng