Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging limited labeled data together with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the labels assigned to unlabeled data, on teacher-student models and contrastive learning, and on developing algorithms that effectively exploit all available unlabeled samples, including those drawn from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
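Two ingredients recur across the papers below: confidence-thresholded pseudo-labeling and a teacher whose weights are an exponential moving average (EMA) of the student's. The following is a minimal NumPy sketch of both ideas; the threshold and decay values are illustrative, not taken from any specific paper.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep only unlabeled samples whose top class probability exceeds
    the threshold; discarding low-confidence predictions reduces the
    noise that pseudo-labels feed back into training."""
    confidence = probs.max(axis=1)
    mask = confidence >= threshold
    labels = probs.argmax(axis=1)
    return mask, labels

def ema_update(teacher_w, student_w, decay=0.999):
    """Teacher weights track the student as an exponential moving
    average, giving a smoother, more stable source of targets."""
    return decay * teacher_w + (1.0 - decay) * student_w

# Toy teacher predictions on four unlabeled samples (rows sum to 1).
probs = np.array([
    [0.97, 0.03],  # confident -> pseudo-labeled as class 0
    [0.60, 0.40],  # uncertain -> discarded
    [0.02, 0.98],  # confident -> pseudo-labeled as class 1
    [0.50, 0.50],  # uncertain -> discarded
])
mask, labels = select_pseudo_labels(probs, threshold=0.95)
print(mask)          # [ True False  True False]
print(labels[mask])  # [0 1]
```

In practice the retained pairs (sample, pseudo-label) are added to the supervised loss, while `ema_update` is applied to every teacher parameter after each student optimizer step.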
Papers
Improving Open-Set Semi-Supervised Learning with Self-Supervision
Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand
When does the student surpass the teacher? Federated Semi-supervised Learning with Teacher-Student EMA
Jessica Zhao, Sayan Ghosh, Akash Bharadwaj, Chih-Yao Ma
Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning
Yawen Cui, Wanxia Deng, Haoyu Chen, Li Liu
What do LLMs Know about Financial Markets? A Case Study on Reddit Market Sentiment Analysis
Xiang Deng, Vasilisa Bashlovkina, Feng Han, Simon Baumgartner, Michael Bendersky
Land Cover and Land Use Detection using Semi-Supervised Learning
Fahmida Tasnim Lisa, Md. Zarif Hossain, Sharmin Naj Mou, Shahriar Ivan, Md. Hasanul Kabir
BTS: Bifold Teacher-Student in Semi-Supervised Learning for Indoor Two-Room Presence Detection Under Time-Varying CSI
Li-Hsiang Shen, Kai-Jui Chen, An-Hung Hsiao, Kai-Ten Feng