Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging a small amount of labeled data together with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the labels assigned to unlabeled samples, on teacher-student and contrastive-learning frameworks, and on algorithms that exploit all available unlabeled data, including samples drawn from open sets or from imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
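As a minimal sketch of the pseudo-labeling idea mentioned above: a model's predictions on unlabeled data are converted to hard labels, but only for samples whose predicted confidence clears a threshold, which is one common way to limit label noise. The function name, threshold value, and toy data below are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Assign hard pseudo-labels to unlabeled samples whose maximum
    predicted class probability exceeds a confidence threshold.

    probs: (N, C) array of per-class probabilities for N unlabeled samples.
    Returns (indices, labels) for the retained high-confidence samples.
    """
    confidence = probs.max(axis=1)          # highest class probability per sample
    keep = confidence >= threshold          # mask of confident predictions
    labels = probs.argmax(axis=1)           # hard label = most probable class
    return np.flatnonzero(keep), labels[keep]

# Toy example: 3 unlabeled samples, 2 classes.
probs = np.array([[0.97, 0.03],   # confident -> kept, labeled class 0
                  [0.60, 0.40],   # uncertain -> discarded
                  [0.02, 0.98]])  # confident -> kept, labeled class 1
idx, labels = pseudo_label(probs)
# idx -> [0, 2], labels -> [0, 1]
```

In a full training loop, the retained (sample, pseudo-label) pairs would be added to the supervised loss; the threshold trades coverage of the unlabeled set against the noise introduced by wrong pseudo-labels.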
Papers
Towards Realistic Long-tailed Semi-supervised Learning in an Open World
Yuanpeng He, Lijian Li
SIAVC: Semi-Supervised Framework for Industrial Accident Video Classification
Zuoyong Li, Qinghua Lin, Haoyi Fan, Tiesong Zhao, David Zhang
Smooth Pseudo-Labeling
Nikolaos Karaliolios, Hervé Le Borgne, Florian Chabot
Automatic diagnosis of cardiac magnetic resonance images based on semi-supervised learning
Hejun Huang, Zuguo Chen, Yi Huang, Guangqiang Luo, Chaoyang Chen, Youzhi Song