Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by combining a small amount of labeled data with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the generated labels, employing teacher-student models and contrastive learning, and developing algorithms that exploit all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
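The pseudo-labeling idea mentioned above can be made concrete in a few lines. Below is a minimal, illustrative sketch (PyTorch assumed) of confidence-thresholded pseudo-labeling; the threshold value, the unit loss weighting, and the toy model are assumptions for illustration, not the method of any paper listed here. Teacher-student variants typically generate the pseudo-labels with an exponential-moving-average copy of the model instead of the model itself.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, labeled_x, labeled_y, unlabeled_x, threshold=0.95):
    """One step of confidence-thresholded pseudo-labeling.

    Supervised cross-entropy on the labeled batch, plus cross-entropy on
    unlabeled samples whose top predicted class probability exceeds
    `threshold`; low-confidence predictions are masked out of the loss.
    """
    # Supervised term on the labeled batch.
    sup_loss = F.cross_entropy(model(labeled_x), labeled_y)

    # Generate pseudo-labels without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=-1)
        confidence, pseudo_y = probs.max(dim=-1)
        mask = (confidence >= threshold).float()

    # Unsupervised term: only confident pseudo-labels contribute.
    per_sample = F.cross_entropy(model(unlabeled_x), pseudo_y, reduction="none")
    unsup_loss = (per_sample * mask).mean()

    return sup_loss + unsup_loss

# Example usage with a toy linear classifier (illustrative only).
model = torch.nn.Linear(16, 4)
labeled_x, labeled_y = torch.randn(8, 16), torch.randint(0, 4, (8,))
unlabeled_x = torch.randn(32, 16)
loss = pseudo_label_loss(model, labeled_x, labeled_y, unlabeled_x)
loss.backward()
```

The confidence mask is the main lever for controlling pseudo-label noise, and much of the research collected below refines how that filtering (or its bias toward majority classes) is handled.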
Papers
An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning
Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj
An interpretable imbalanced semi-supervised deep learning framework for improving differential diagnosis of skin diseases
Futian Weng, Yuanting Ma, Jinghan Sun, Shijun Shan, Qiyuan Li, Jianping Zhu, Yang Wang, Yan Xu
Contrastive Credibility Propagation for Reliable Semi-Supervised Learning
Brody Kutt, Pralay Ramteke, Xavier Mignot, Pamela Toman, Nandini Ramanan, Sujit Rokka Chhetri, Shan Huang, Min Du, William Hewlett
NorMatch: Matching Normalizing Flows with Discriminative Classifiers for Semi-Supervised Learning
Zhongying Deng, Rihuan Ke, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
You Only Label Once: 3D Box Adaptation from Point Cloud to Image via Semi-Supervised Learning
Jieqi Shi, Peiliang Li, Xiaozhi Chen, Shaojie Shen