Semi-Supervised Learning
Semi-supervised learning trains machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data that bottlenecks many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean-teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
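To make the pseudo-labeling and mean-teacher ideas above concrete, here is a minimal NumPy sketch (not from any of the listed papers): a teacher model predicts on unlabeled data, only high-confidence predictions are kept as pseudo-labels, and the teacher's weights are updated as an exponential moving average (EMA) of the student's. The linear model, the 0.8 confidence threshold, and the 0.99 EMA decay are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy linear classifier: 2 input features, 3 classes (illustrative shapes).
W_student = rng.normal(size=(2, 3))
W_teacher = W_student.copy()  # teacher starts as a copy of the student

X_unlabeled = rng.normal(size=(5, 2))  # a small batch of unlabeled points

# 1. The teacher predicts class probabilities on the unlabeled batch.
probs = softmax(X_unlabeled @ W_teacher)

# 2. Keep only confident predictions as pseudo-labels.
#    A fixed threshold such as 0.8 is a common, simple choice.
confidence = probs.max(axis=1)
mask = confidence > 0.8
pseudo_labels = probs.argmax(axis=1)[mask]
X_confident = X_unlabeled[mask]

# 3. A supervised training step on (X_confident, pseudo_labels)
#    would update W_student here (omitted in this sketch).

# 4. Mean-teacher update: the teacher's weights track an EMA of the
#    student's, which smooths noise in the pseudo-label source.
ema_decay = 0.99
W_teacher = ema_decay * W_teacher + (1 - ema_decay) * W_student
```

In practice the threshold and EMA decay are tuned per task, and the confidence filter is what current work tries to improve, since low-quality pseudo-labels can reinforce the model's own errors.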
Papers
Deep Semi-supervised Learning with Double-Contrast of Features and Semantics
Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou
Perturb Initial Features: Generalization of Neural Networks Under Sparse Features for Semi-supervised Node Classification
Yoonhyuk Choi, Jiho Choi, Taewook Ko, Chong-Kwon Kim
Semi-Supervised Confidence-Level-based Contrastive Discrimination for Class-Imbalanced Semantic Segmentation
Kangcheng Liu
Semi-supervised learning for continuous emotional intensity controllable speech synthesis with disentangled representations
Yoori Oh, Juheon Lee, Yoseob Han, Kyogu Lee
Semi-supervised Variational Autoencoder for Regression: Application on Soft Sensors
Yilin Zhuang, Zhuobin Zhou, Burak Alakent, Mehmet Mercangöz