Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging a small amount of labeled data together with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias they introduce, on teacher-student and contrastive-learning frameworks, and on algorithms that exploit all available unlabeled samples, including those drawn from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual labeling, extending machine learning to domains where annotated data is scarce.
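The pseudo-labeling and teacher-student ideas mentioned above are commonly combined as confidence-thresholded self-training, in the style of FixMatch and Mean Teacher. The sketch below is a minimal PyTorch illustration under those assumptions, not the method of any paper listed here; the 0.95 confidence threshold, the 0.999 EMA decay, and the weak/strong augmented inputs are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pseudo_label_loss(teacher, student, x_weak, x_strong, threshold=0.95):
    """Confidence-thresholded pseudo-labeling on one unlabeled batch.

    The teacher predicts on a weakly augmented view; predictions above
    `threshold` become hard pseudo-labels that supervise the student's
    output on a strongly augmented view. Low-confidence samples are
    masked out of the loss.
    """
    with torch.no_grad():
        probs = F.softmax(teacher(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    loss = F.cross_entropy(student(x_strong), pseudo, reduction="none")
    return (loss * mask).mean()

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Teacher weights track an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)

# Toy usage: linear models and random tensors stand in for real
# networks and augmented image batches.
student = nn.Linear(32, 10)
teacher = nn.Linear(32, 10)
teacher.load_state_dict(student.state_dict())
x_weak, x_strong = torch.randn(8, 32), torch.randn(8, 32)
loss = pseudo_label_loss(teacher, student, x_weak, x_strong)
loss.backward()
ema_update(teacher, student)
```

Masking low-confidence predictions limits the confirmation bias that noisy pseudo-labels can cause, which connects to the noise and bias concerns noted in the overview.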
Papers
Don't fear the unlabelled: safe semi-supervised learning via simple debiasing
Hugo Schmutz, Olivier Humbert, Pierre-Alexandre Mattei
Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases
Hasan Kassem, Deepak Alapatt, Pietro Mascagni, AI4SafeChole Consortium, Alexandros Karargyris, Nicolas Padoy
S5CL: Unifying Fully-Supervised, Self-Supervised, and Semi-Supervised Learning Through Hierarchical Contrastive Learning
Manuel Tran, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
SimMatch: Semi-supervised Learning with Similarity Matching
Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, Chang Xu
Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation
Alvaro Gomariz, Huanxiang Lu, Yun Yvonna Li, Thomas Albrecht, Andreas Maunz, Fethallah Benmansour, Alessandra M. Valcarcel, Jennifer Luu, Daniela Ferrara, Orcun Goksel
On the pitfalls of entropy-based uncertainty for multi-class semi-supervised segmentation
Martin Van Waerebeke, Gregory Lodygensky, Jose Dolz