Semi-Supervised Learning
Semi-supervised learning trains machine learning models on a mix of labeled and unlabeled data, addressing the scarcity of labeled data, a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean-teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
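To make the pseudo-labeling idea concrete, below is a minimal sketch of a combined loss for one training step, assuming a PyTorch image or audio classifier. It is not taken from any of the papers listed here; the `threshold` and `unlabeled_weight` hyperparameters are illustrative assumptions. The model labels its own unlabeled batch, keeps only high-confidence predictions, and adds a cross-entropy term on those examples to the ordinary supervised loss.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, labeled_x, labels, unlabeled_x,
                         threshold=0.95, unlabeled_weight=1.0):
    """Supervised loss plus a confidence-filtered pseudo-label loss."""
    # Standard supervised cross-entropy on the labeled batch.
    sup_loss = F.cross_entropy(model(labeled_x), labels)

    # Generate pseudo-labels from the model's own predictions (no gradients).
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = confidence >= threshold  # keep only confident predictions

    # Unsupervised term computed only on the confidently pseudo-labeled examples.
    if mask.any():
        unsup_loss = F.cross_entropy(model(unlabeled_x[mask]), pseudo_labels[mask])
    else:
        unsup_loss = torch.zeros((), device=labeled_x.device)

    return sup_loss + unlabeled_weight * unsup_loss
```

The confidence threshold is the key design choice: it trades off how much unlabeled data contributes against the risk of reinforcing the model's own mistakes, which is why much of the current work focuses on producing better-calibrated pseudo-labels (e.g., via mean-teacher averaging or contrastive pretraining) rather than on the loss itself.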
Papers
Self-supervised Graphs for Audio Representation Learning with Limited Labeled Data
Amir Shirian, Krishna Somandepalli, Tanaya Guha
Guided Semi-Supervised Non-negative Matrix Factorization on Legal Documents
Pengyu Li, Christine Tseng, Yaxuan Zheng, Joyce A. Chew, Longxiu Huang, Benjamin Jarman, Deanna Needell
FaceQgen: Semi-Supervised Deep Learning for Face Image Quality Assessment
Javier Hernandez-Ortega, Julian Fierrez, Ignacio Serna, Aythami Morales
Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones
Zhe Zhang, Shiyao Ma, Zhaohui Yang, Zehui Xiong, Jiawen Kang, Yi Wu, Kejia Zhang, Dusit Niyato