Semi-Supervised Learning
Semi-supervised learning aims to train machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data, which is a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean teacher models within architectures including variational autoencoders, transformers, and graph neural networks. These methods are proving valuable across diverse fields, improving performance in medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
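To make the pseudo-labeling and mean teacher ideas above concrete, here is a minimal PyTorch sketch, not drawn from any of the papers listed below: a student network trains on labeled data plus high-confidence pseudo-labels from a teacher, and the teacher tracks an exponential moving average (EMA) of the student's weights. The tiny classifier, confidence threshold, EMA decay, and loss weighting are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    # Teacher weights become an exponential moving average of student weights.
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1 - decay)

# Hypothetical tiny classifier; any backbone could stand in here.
def make_model():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

student, teacher = make_model(), make_model()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is updated only via EMA, never by backprop

optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

# Random batches standing in for a small labeled set and a larger unlabeled set.
x_labeled, y_labeled = torch.randn(16, 32), torch.randint(0, 10, (16,))
x_unlabeled = torch.randn(64, 32)

for step in range(100):
    # Supervised loss on the scarce labeled batch.
    sup_loss = F.cross_entropy(student(x_labeled), y_labeled)

    # Pseudo-labels: the teacher's confident predictions on unlabeled data.
    with torch.no_grad():
        probs = F.softmax(teacher(x_unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf > 0.9  # keep only high-confidence pseudo-labels

    unsup_loss = torch.tensor(0.0)
    if mask.any():
        unsup_loss = F.cross_entropy(student(x_unlabeled[mask]), pseudo[mask])

    loss = sup_loss + 0.5 * unsup_loss  # illustrative unsupervised-loss weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # teacher slowly follows the student

The confidence threshold is one simple way to control pseudo-label quality; the papers below explore richer alternatives such as contrastive objectives, negative training, and graph-based propagation.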
Papers
A Comparison of Self-Supervised Pretraining Approaches for Predicting Disease Risk from Chest Radiograph Images
Yanru Chen, Michael T Lu, Vineet K Raghu
Knowledge Assembly: Semi-Supervised Multi-Task Learning from Multiple Datasets with Disjoint Labels
Federica Spinola, Philipp Benz, Minhyeong Yu, Tae-hoon Kim
Rank-Aware Negative Training for Semi-Supervised Text Classification
Ahmed Murtadha, Shengfeng Pan, Wen Bo, Jianlin Su, Xinxin Cao, Wenze Zhang, Yunfeng Liu
Semi-supervised learning made simple with self-supervised clustering
Enrico Fini, Pietro Astolfi, Karteek Alahari, Xavier Alameda-Pineda, Julien Mairal, Moin Nabi, Elisa Ricci
Persistent Laplacian-enhanced Algorithm for Scarcely Labeled Data Classification
Gokul Bhusal, Ekaterina Merkurjev, Guo-Wei Wei
Cross-supervised Dual Classifiers for Semi-supervised Medical Image Segmentation
Zhenxi Zhang, Ran Ran, Chunna Tian, Heng Zhou, Fan Yang, Xin Li, Zhicheng Jiao
Jointprop: Joint Semi-supervised Learning for Entity and Relation Extraction with Heterogeneous Graph-based Propagation
Yandan Zheng, Anran Hao, Anh Tuan Luu