Semi-Supervised Learning
Semi-supervised learning trains machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data, a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean-teacher models within architectures including variational autoencoders, transformers, and graph neural networks. These approaches are proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
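To make the pseudo-labeling idea concrete, here is a minimal self-training sketch, not taken from any of the listed papers: a simple nearest-centroid classifier is fit on the labeled points, unlabeled points whose prediction confidence exceeds a threshold receive pseudo-labels, and the classifier is refit on the enlarged training set. All function names and the confidence heuristic (a softmax over negative centroid distances) are illustrative assumptions.

```python
# Pseudo-labeling sketch (hypothetical helper names, not from the papers above):
# fit on labeled data, pseudo-label confident unlabeled points, refit.
import numpy as np

def fit_centroids(X, y):
    """Class centroids of the labeled data, one row per class."""
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_with_confidence(centroids, X):
    """Label = nearest centroid; confidence = softmax over negative distances."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return p.argmax(axis=1), p.max(axis=1)

def pseudo_label_round(X_lab, y_lab, X_unlab, threshold=0.7):
    """One self-training round: keep confident pseudo-labels, refit centroids."""
    centroids = fit_centroids(X_lab, y_lab)
    y_hat, conf = predict_with_confidence(centroids, X_unlab)
    keep = conf >= threshold
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, y_hat[keep]])
    return fit_centroids(X_new, y_new), int(keep.sum())

# Toy data: two well-separated 2-D clusters, only one labeled point each.
rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [5.0, 5.0]])
y_lab = np.array([0, 1])
X_unlab = np.vstack([rng.normal(0, 0.3, (20, 2)),
                     rng.normal(5, 0.3, (20, 2))])

centroids, n_added = pseudo_label_round(X_lab, y_lab, X_unlab)
print(n_added)  # number of unlabeled points confidently pseudo-labeled
```

Methods in the papers listed below refine this loop in various ways, e.g. by filtering pseudo-labels with cross-view consistency or by replacing the single teacher with an exponential-moving-average (mean-teacher) model.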
Papers
Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation
Zicheng Wang, Zhen Zhao, Xiaoxia Xing, Dong Xu, Xiangyu Kong, Luping Zhou
Steering Graph Neural Networks with Pinning Control
Acong Zhang, Ping Li, Guanrong Chen
Ego-Vehicle Action Recognition based on Semi-Supervised Contrastive Learning
Chihiro Noguchi, Toshihiro Tanizawa
CRL+: A Novel Semi-Supervised Deep Active Contrastive Representation Learning-Based Text Classification Model for Insurance Data
Amir Namavar Jahromi, Ebrahim Pourjafari, Hadis Karimipour, Amit Satpathy, Lovell Hodge
Multi-site Organ Segmentation with Federated Partial Supervision and Site Adaptation
Pengbo Liu, Mengke Sun, S. Kevin Zhou