Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by combining a small set of labeled examples with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the generated labels, on teacher-student and contrastive-learning frameworks, and on algorithms that effectively exploit all available unlabeled samples, including those drawn from open sets or from imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
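To make the pseudo-labeling idea concrete, the sketch below shows a single confidence-thresholded SSL training step: a supervised loss on the labeled batch plus an unsupervised loss on unlabeled examples whose predicted class is sufficiently confident. This is a minimal illustrative sketch, not the method of any paper listed here; the model, the 0.95 threshold, and the loss weighting are hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssl_step(model, x_labeled, y_labeled, x_unlabeled,
             threshold=0.95, unlabeled_weight=1.0):
    """One training step combining supervised and pseudo-label losses.

    threshold and unlabeled_weight are illustrative hyperparameters.
    """
    # Supervised loss on the small labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Generate pseudo-labels on unlabeled data without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = conf >= threshold  # keep only high-confidence predictions

    # Unsupervised loss: train only on the confident pseudo-labels.
    if mask.any():
        unsup_loss = F.cross_entropy(model(x_unlabeled[mask]),
                                     pseudo_labels[mask])
    else:
        unsup_loss = torch.zeros((), device=x_labeled.device)

    return sup_loss + unlabeled_weight * unsup_loss

# Toy usage: a linear classifier on random data, purely to show the API.
model = nn.Linear(16, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x_l, y_l = torch.randn(8, 16), torch.randint(0, 3, (8,))
x_u = torch.randn(64, 16)

opt.zero_grad()
loss = ssl_step(model, x_l, y_l, x_u)
loss.backward()
opt.step()
```

The confidence threshold is the usual lever for trading pseudo-label coverage against noise: raising it admits fewer but cleaner pseudo-labels, which is the tension the papers below address with aggregation, teacher-student, and graph-based refinements.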
Papers
ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition
Yash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, Akhil Mathur
Deep Reference Priors: What is the best way to pretrain a model?
Yansong Gao, Rahul Ramesh, Pratik Chaudhari
Semi-supervised 3D Object Detection via Temporal Graph Neural Networks
Jianren Wang, Haiming Gang, Siddharth Ancha, Yi-Ting Chen, David Held
Unsupervised Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation via Semi-supervised Learning and Label Fusion
Han Liu, Yubo Fan, Can Cui, Dingjie Su, Andrew McNeil, Benoit M. Dawant
DebtFree: Minimizing Labeling Cost in Self-Admitted Technical Debt Identification using Semi-Supervised Learning
Huy Tu, Tim Menzies
AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning
Jiwon Kim, Kwangrok Ryoo, Gyuseong Lee, Seokju Cho, Junyoung Seo, Daehwan Kim, Hansang Cho, Seungryong Kim
Semi-Supervised GCN for learning Molecular Structure-Activity Relationships
Alessio Ragno, Dylan Savoia, Roberto Capobianco