Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging a small amount of labeled data together with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias in automatically generated labels, employing teacher-student models and contrastive learning, and developing algorithms that effectively utilize all available unlabeled samples, including those drawn from open sets or exhibiting imbalanced class distributions. These advances are significant because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce. The core pseudo-labeling loop shared by many of these methods is sketched below.
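The sketch below illustrates confidence-thresholded pseudo-labeling with a teacher-student setup: train a teacher on the labeled data, keep only its confident predictions on the unlabeled data as pseudo-labels, and retrain a student on the combined set. The 0.95 confidence threshold, the logistic-regression models, and the synthetic data are illustrative assumptions, not details taken from any paper listed here.

```python
# Minimal pseudo-labeling sketch (assumed setup, not from a specific paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a small labeled set and a large unlabeled set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:100], y[:100]   # limited labeled data
X_unlab = X[100:]                 # abundant unlabeled data

# 1. Train a teacher on the labeled data alone.
teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# 2. Pseudo-label the unlabeled data, keeping only confident predictions
#    to limit pseudo-label noise (threshold 0.95 is an illustrative choice).
probs = teacher.predict_proba(X_unlab)
confident = probs.max(axis=1) >= 0.95
pseudo_y = probs.argmax(axis=1)[confident]

# 3. Retrain a student on labeled + confidently pseudo-labeled samples.
X_train = np.vstack([X_lab, X_unlab[confident]])
y_train = np.concatenate([y_lab, pseudo_y])
student = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(f"kept {confident.sum()} / {len(X_unlab)} pseudo-labels")
```

In practice this loop is often iterated, and the papers below refine it in different directions, e.g. aligning pseudo-label distributions, handling open-set unlabeled samples, or combining it with contrastive objectives.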
Papers
When CNN Meet with ViT: Towards Semi-Supervised Learning for Multi-Class Medical Image Semantic Segmentation
Ziyang Wang, Tianze Li, Jian-Qing Zheng, Baoru Huang
USB: A Unified Semi-supervised Learning Benchmark for Classification
Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, Renjie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
RDA: Reciprocal Distribution Alignment for Robust Semi-supervised Learning
Yue Duan, Lei Qi, Lei Wang, Luping Zhou, Yinghuan Shi
LAMDA-SSL: Semi-Supervised Learning in Python
Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Yu-Feng Li
Comparison of semi-supervised learning methods for High Content Screening quality control
Umar Masud, Ethan Cohen, Ihab Bendidi, Guillaume Bollot, Auguste Genovesio