Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve machine learning model accuracy by leveraging both limited labeled and abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in unlabeled data, employing teacher-student models and contrastive learning, and developing novel algorithms to effectively utilize all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advancements are significant because they reduce the reliance on expensive and time-consuming manual labeling, thereby expanding the applicability of machine learning to diverse domains with limited annotated data.
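The pseudo-labeling idea mentioned above can be sketched in a few lines: train on the labeled set, predict labels for unlabeled points, keep only confident predictions, and retrain on the enlarged set. The following is a minimal, hypothetical illustration using a toy nearest-centroid classifier in NumPy; the function names, the negative-distance confidence proxy, and the threshold are illustrative assumptions, not the method of any paper listed on this page.

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    # Toy "model": per-class mean (nearest-centroid classifier).
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    # Distance of every point to every centroid; nearest one wins.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    # Negative distance serves as a crude confidence proxy (assumption).
    return d.argmin(axis=1), -d.min(axis=1)

def pseudo_label(X_lab, y_lab, X_unlab, n_classes, threshold=-1.0, rounds=3):
    # Iteratively: fit on labeled data, pseudo-label confident unlabeled
    # points, and refit on the union (self-training loop).
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        centroids = fit_centroids(X, y, n_classes)
        labels, conf = predict(centroids, X_unlab)
        keep = conf >= threshold  # keep only confident pseudo-labels
        if not keep.any():
            break
        X = np.vstack([X_lab, X_unlab[keep]])
        y = np.concatenate([y_lab, labels[keep]])
    return fit_centroids(X, y, n_classes)
```

Real SSL methods replace the centroid model with a neural network, add confidence calibration to reduce pseudo-label noise, and often pair this loop with a teacher-student setup or a contrastive objective, as the papers below explore.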
599 papers
Papers - Page 6
August 10, 2024
A Laplacian-based Quantum Graph Neural Network for Semi-Supervised Learning
Hamed Gholipour, Farid Bozorgnia, Kailash Hambarde, Hamzeh Mohammadigheymasi, Javier Mancilla, Andre Sequeira, Joao Neves, Hugo Proença
August 8, 2024
Interface Laplace Learning: Learnable Interface Term Helps Semi-Supervised Learning
Tangjun Wang, Chenglong Bao, Zuoqiang Shi
August 5, 2024
July 14, 2024
Learning Unlabeled Clients Divergence for Federated Semi-Supervised Learning via Anchor Model Aggregation
Marawan Elbatel, Hualiang Wang, Jixiang Chen, Hao Wang, Xiaomeng Li
July 11, 2024
Defending Against Repetitive Backdoor Attacks on Semi-supervised Learning through Lens of Rate-Distortion-Perception Trade-off
Cheng-Yi Lee, Ching-Chia Kao, Cheng-Han Yeh, Chun-Shien Lu, Chia-Mu Yu, Chu-Song Chen