Semi-Supervised Learning
Semi-supervised learning trains machine learning models on a mix of labeled and unlabeled data, addressing the scarcity of labeled data, a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean-teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
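The pseudo-labeling idea mentioned above can be sketched concretely: train on the labeled data, predict on the unlabeled data, and keep only confident predictions as extra training labels. The sketch below uses a toy 1-D nearest-centroid classifier; all names, the margin-based confidence score, and the threshold are illustrative assumptions, not taken from any of the papers listed here.

```python
# Minimal pseudo-labeling sketch with a toy 1-D nearest-centroid classifier.
# The confidence measure (margin between the two nearest centroids) and the
# threshold value are illustrative assumptions, not from any specific paper.

def fit_centroids(xs, ys):
    """Compute the mean feature value per class."""
    cents = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        cents[label] = sum(pts) / len(pts)
    return cents

def predict_with_confidence(cents, x):
    """Return the label of the nearest centroid plus a crude confidence
    score based on the margin between the two nearest centroids."""
    dists = sorted((abs(x - c), label) for label, c in cents.items())
    best, second = dists[0], dists[1]
    confidence = (second[0] - best[0]) / (second[0] + 1e-9)
    return best[1], confidence

def pseudo_label(xs_lab, ys_lab, xs_unlab, threshold=0.5):
    """Train on labeled data, adopt high-confidence predictions on
    unlabeled points as pseudo-labels, then retrain on the union."""
    cents = fit_centroids(xs_lab, ys_lab)
    new_xs, new_ys = list(xs_lab), list(ys_lab)
    for x in xs_unlab:
        label, conf = predict_with_confidence(cents, x)
        if conf >= threshold:  # keep only confident pseudo-labels
            new_xs.append(x)
            new_ys.append(label)
    return fit_centroids(new_xs, new_ys)

# Two classes around 0 and 10; the ambiguous point 5.0 is rejected,
# while the confident points 0.5 and 9.5 are absorbed as pseudo-labels.
cents = pseudo_label([0.0, 1.0, 9.0, 10.0], [0, 0, 1, 1],
                     [0.5, 9.5, 5.0])
```

Thresholding on confidence is what separates pseudo-labeling from naive self-training on every prediction; many of the methods surveyed above can be read as progressively better ways of estimating that confidence.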
Papers
Label-invariant Augmentation for Semi-Supervised Graph Classification
Han Yue, Chunhui Zhang, Chuxu Zhang, Hongfu Liu
Semi-WTC: A Practical Semi-supervised Framework for Attack Categorization through Weight-Task Consistency
Zihan Li, Wentao Chen, Zhiqing Wei, Xingqi Luo, Bing Su
Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer
Zhengyuan Liu, Nancy F. Chen
Semi-supervised learning approaches for predicting South African political sentiment for local government elections
Mashadi Ledwaba, Vukosi Marivate
FedMix: Mixed Supervised Federated Learning for Medical Image Segmentation
Jeffry Wicaksana, Zengqiang Yan, Dong Zhang, Xijie Huang, Huimin Wu, Xin Yang, Kwang-Ting Cheng