Semi-Supervised Learning
Semi-supervised learning (SSL) improves model accuracy by combining a small amount of labeled data with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias in model-generated labels, on teacher-student and contrastive-learning frameworks, and on algorithms that exploit all available unlabeled samples, including those drawn from open sets or from imbalanced class distributions. These advances matter because they reduce the reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
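To make the pseudo-labeling idea concrete, here is a minimal PyTorch sketch of one training step that combines a supervised loss on a labeled batch with a confidence-filtered pseudo-label loss on an unlabeled batch. Everything in it (the SimpleNet model, the 0.95 confidence threshold, the loss weight lam) is an illustrative assumption, not taken from any of the papers listed below.

    # Minimal confidence-thresholded pseudo-labeling step (sketch).
    # SimpleNet, the 0.95 threshold, and the optimizer settings are
    # illustrative assumptions only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleNet(nn.Module):
        def __init__(self, in_dim=32, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes)
            )

        def forward(self, x):
            return self.net(x)

    def ssl_step(model, optimizer, x_lab, y_lab, x_unlab,
                 threshold=0.95, lam=1.0):
        """One step on a labeled batch plus a pseudo-labeled unlabeled batch."""
        model.train()
        # Standard supervised loss on the labeled batch.
        sup_loss = F.cross_entropy(model(x_lab), y_lab)

        # Generate pseudo-labels on the unlabeled batch and keep only
        # high-confidence predictions, discarding likely-noisy labels.
        with torch.no_grad():
            probs = F.softmax(model(x_unlab), dim=1)
            conf, pseudo = probs.max(dim=1)
            mask = conf >= threshold

        unsup_loss = torch.tensor(0.0)
        if mask.any():
            # Train on the confident pseudo-labels as if they were ground truth.
            unsup_loss = F.cross_entropy(model(x_unlab[mask]), pseudo[mask])

        loss = sup_loss + lam * unsup_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage with random tensors, just to show the call pattern.
    model = SimpleNet()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x_l, y_l = torch.randn(16, 32), torch.randint(0, 10, (16,))
    x_u = torch.randn(64, 32)
    print(ssl_step(model, opt, x_l, y_l, x_u))

Methods in the literature typically go further than this sketch: for example, FixMatch-style approaches generate pseudo-labels from a weakly augmented view and apply the loss to a strongly augmented view, and teacher-student schemes produce pseudo-labels with a separate, slowly updated teacher model rather than the single network used here.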
Papers
Automated Coronary Calcium Scoring using U-Net Models through Semi-supervised Learning on Non-Gated CT Scans
Sanskriti Singh
EnergyMatch: Energy-based Pseudo-Labeling for Semi-Supervised Learning
Zhuoran Yu, Yin Li, Yong Jae Lee
Confident Sinkhorn Allocation for Pseudo-Labeling
Vu Nguyen, Hisham Husain, Sachin Farfade, Anton van den Hengel
ACT: Semi-supervised Domain-adaptive Medical Image Segmentation with Asymmetric Co-training
Xiaofeng Liu, Fangxu Xing, Nadya Shusharina, Ruth Lim, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Semi-Supervised Learning for Mars Imagery Classification and Segmentation
Wenjing Wang, Lilang Lin, Zejia Fan, Jiaying Liu
Semi-Supervised Learning for Image Classification using Compact Networks in the BioMedical Context
Adrián Inés, Andrés Díaz-Pinto, César Domínguez, Jónathan Heras, Eloy Mata, Vico Pascual
A Topological Approach for Semi-Supervised Learning
Adrián Inés, César Domínguez, Jónathan Heras, Gadea Mata, Julio Rubio