Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by combining a small amount of labeled data with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias in labels inferred for unlabeled samples, on teacher-student models and contrastive learning, and on novel algorithms that exploit all available unlabeled samples, including those drawn from open sets or with imbalanced class distributions. These advances are significant because they reduce reliance on expensive, time-consuming manual labeling, extending machine learning to domains where annotated data is scarce.
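To make the pseudo-labeling and teacher-student ideas above concrete, here is a minimal PyTorch sketch in the spirit of FixMatch-style confidence thresholding combined with a Mean-Teacher-style EMA update. The models, the 0.95 threshold, the loss weight, and all function names are illustrative assumptions for this sketch, not the method of any paper listed below.

```python
# Minimal sketch: confidence-thresholded pseudo-labeling with an EMA teacher.
# All hyperparameters and architectures here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def ssl_step(student, teacher, x_labeled, y_labeled, x_unlabeled,
             threshold: float = 0.95, lambda_u: float = 1.0):
    """One training step: supervised loss + masked pseudo-label loss."""
    # Supervised loss on the small labeled batch.
    sup_loss = F.cross_entropy(student(x_labeled), y_labeled)

    # The teacher produces pseudo-labels on the unlabeled batch.
    with torch.no_grad():
        probs = F.softmax(teacher(x_unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf.ge(threshold).float()  # keep only confident predictions

    # The student is trained to match the pseudo-labels; low-confidence
    # samples are masked out to limit pseudo-label noise.
    unsup_loss = (F.cross_entropy(student(x_unlabeled), pseudo,
                                  reduction="none") * mask).mean()
    return sup_loss + lambda_u * unsup_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    student = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    teacher.load_state_dict(student.state_dict())  # start from same weights
    opt = torch.optim.SGD(student.parameters(), lr=0.1)

    x_l, y_l = torch.randn(8, 20), torch.randint(0, 5, (8,))
    x_u = torch.randn(64, 20)  # abundant unlabeled batch

    loss = ssl_step(student, teacher, x_l, y_l, x_u)
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    print(f"combined loss: {loss.item():.4f}")
```

The confidence mask is the key design choice: it trades coverage of the unlabeled set for pseudo-label quality, which is exactly the noise/bias tension the research above targets.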
Papers
Communication-Efficient Vertical Federated Learning with Limited Overlapping Samples
Jingwei Sun, Ziyue Xu, Dong Yang, Vishwesh Nath, Wenqi Li, Can Zhao, Daguang Xu, Yiran Chen, Holger R. Roth
TimeBalance: Temporally-Invariant and Temporally-Distinctive Video Representations for Semi-Supervised Action Recognition
Ishan Rajendrakumar Dave, Mamshad Nayeem Rizve, Chen Chen, Mubarak Shah
Addressing Class Variable Imbalance in Federated Semi-supervised Learning
Zehui Dong, Wenjing Liu, Siyuan Liu, Xingzhi Chen
On the link between generative semi-supervised learning and generative open-set recognition
Emile Reyn Engelbrecht, Johan du Preez
Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning
Yang Yu, Danruo Deng, Furui Liu, Yueming Jin, Qi Dou, Guangyong Chen, Pheng-Ann Heng