Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging both limited labeled data and abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias that automatically assigned labels introduce, on teacher-student models and contrastive learning, and on algorithms that make effective use of all available unlabeled samples, including those drawn from open sets or with imbalanced class distributions. These advances matter because they reduce the reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
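To make the pseudo-labeling idea concrete, the sketch below shows confidence-thresholded pseudo-labeling in the style of FixMatch: the model's own confident predictions on unlabeled data are treated as training targets, and low-confidence predictions are masked out to limit label noise. This is a minimal illustration, not the method of any paper listed below; the `model`, `x_unlabeled`, and `threshold` names are assumptions for the example.

```python
# Minimal sketch of confidence-thresholded pseudo-labeling (FixMatch-style).
# Assumes a PyTorch classifier `model` mapping inputs to class logits.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, threshold=0.95):
    """Cross-entropy loss on high-confidence pseudo-labels only."""
    with torch.no_grad():
        # "Teacher" pass: predict class probabilities on unlabeled inputs.
        probs = F.softmax(model(x_unlabeled), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        # Keep only predictions above the confidence threshold to cut noise.
        mask = confidence >= threshold
    if mask.sum() == 0:
        # No confident pseudo-labels in this batch: contribute zero loss.
        return torch.tensor(0.0, device=x_unlabeled.device)
    # "Student" pass: train the model to match its own confident predictions.
    # (In FixMatch proper, the student would see a strongly augmented view.)
    logits = model(x_unlabeled[mask])
    return F.cross_entropy(logits, pseudo_labels[mask])
```

The threshold is the key noise-control knob: a high value admits fewer but cleaner pseudo-labels, while a low value uses more of the unlabeled data at the cost of more label noise, which is the trade-off much of the work below aims to manage.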
Papers
Neural Data-to-Text Generation Based on Small Datasets: Comparing the Added Value of Two Semi-Supervised Learning Approaches on Top of a Large Language Model
Chris van der Lee, Thiago Castro Ferreira, Chris Emmery, Travis Wiltshire, Emiel Krahmer
Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning
Xingping Dong, Tianran Ouyang, Shengcai Liao, Bo Du, Ling Shao
Semi-supervised cross-lingual speech emotion recognition
Mirko Agarla, Simone Bianco, Luigi Celona, Paolo Napoletano, Alexey Petrovsky, Flavio Piccoli, Raimondo Schettini, Ivan Shanin
Towards Realistic Semi-Supervised Learning
Mamshad Nayeem Rizve, Navid Kardan, Mubarak Shah
OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning
Mamshad Nayeem Rizve, Navid Kardan, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah
A Safe Semi-supervised Graph Convolution Network
Zhi Yang, Yadong Yan, Haitao Gan, Jing Zhao, Zhiwei Ye