Semi-Supervised Learning
Semi-supervised learning trains machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data that bottlenecks many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
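To make the pseudo-labeling idea concrete, here is a minimal self-training sketch (not the method of any paper listed below): fit a classifier on the labeled set, predict on the unlabeled set, and fold only high-confidence predictions back into the training data. The nearest-centroid classifier and the softmax-over-distances confidence score are illustrative assumptions chosen to keep the example self-contained; real systems use stronger models and calibrated confidence.

```python
import numpy as np

def centroids(X, y):
    """Compute one class centroid per label (a stand-in for a real model)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(X, cents):
    """Predict the nearest centroid and a crude softmax confidence."""
    labels = sorted(cents)
    # Distance of each sample to each class centroid.
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in labels], axis=1)
    # Softmax over negative distances: closer centroid -> higher confidence.
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return np.array(labels)[p.argmax(axis=1)], p.max(axis=1)

def pseudo_label_round(X_lab, y_lab, X_unl, threshold=0.9):
    """One self-training round: fit on labeled data, pseudo-label the
    unlabeled samples, and keep only predictions above the threshold."""
    cents = centroids(X_lab, y_lab)
    pred, conf = predict_with_confidence(X_unl, cents)
    keep = conf >= threshold
    X_new = np.concatenate([X_lab, X_unl[keep]])
    y_new = np.concatenate([y_lab, pred[keep]])
    return X_new, y_new, int(keep.sum())
```

With two well-separated clusters, points near a centroid are pseudo-labeled while an ambiguous midpoint is left out; iterating this round grows the labeled set, which is the core loop that techniques like mean teacher and knowledge distillation refine.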
Papers
SemiPFL: Personalized Semi-Supervised Federated Learning Framework for Edge Intelligence
Arvin Tashakori, Wenwen Zhang, Z. Jane Wang, Peyman Servati
Pose-MUM: Reinforcing Key Points Relationship for Semi-Supervised Human Pose Estimation
JongMok Kim, Hwijun Lee, Jaeseung Lim, Jongkeun Na, Nojun Kwak, Jin Young Choi
GCT: Graph Co-Training for Semi-Supervised Few-Shot Learning
Rui Xu, Lei Xing, Shuai Shao, Lifei Zhao, Baodi Liu, Weifeng Liu, Yicong Zhou
Don't fear the unlabelled: safe semi-supervised learning via simple debiasing
Hugo Schmutz, Olivier Humbert, Pierre-Alexandre Mattei
Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases
Hasan Kassem, Deepak Alapatt, Pietro Mascagni, AI4SafeChole Consortium, Alexandros Karargyris, Nicolas Padoy
DS3-Net: Difficulty-perceived Common-to-T1ce Semi-Supervised Multimodal MRI Synthesis Network
Ziqi Huang, Li Lin, Pujin Cheng, Kai Pan, Xiaoying Tang