Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by combining a small amount of labeled data with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the generated labels, employing teacher-student models and contrastive learning, and developing algorithms that effectively exploit all available unlabeled samples, including those drawn from open sets or from imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
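To make the pseudo-labeling idea concrete, here is a minimal self-training sketch in Python using scikit-learn; the synthetic data, the 0.95 confidence threshold, and the three-round loop are illustrative assumptions, not the method of any paper listed below.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative setup: a small labeled set and a larger unlabeled pool.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_labeled, y_labeled = X[:100], y[:100]
X_unlabeled = X[100:]

# Confidence threshold for accepting a pseudo-label (an assumed value).
TAU = 0.95

model = LogisticRegression(max_iter=1000)
model.fit(X_labeled, y_labeled)

for _ in range(3):  # a few self-training rounds
    probs = model.predict_proba(X_unlabeled)
    confidence = probs.max(axis=1)
    mask = confidence >= TAU  # keep only confident predictions to limit label noise
    if not mask.any():
        break

    pseudo_labels = probs.argmax(axis=1)[mask]

    # Fold confident pseudo-labeled samples into the labeled set and retrain.
    X_labeled = np.vstack([X_labeled, X_unlabeled[mask]])
    y_labeled = np.concatenate([y_labeled, pseudo_labels])
    X_unlabeled = X_unlabeled[~mask]

    model.fit(X_labeled, y_labeled)

print(f"Final labeled-set size: {len(y_labeled)}")
```

The threshold is the main lever: a higher value admits fewer but cleaner pseudo-labels, which is the noise-versus-coverage trade-off that much of the work below targets.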
Papers
Learning the Right Layers: a Data-Driven Layer-Aggregation Strategy for Semi-Supervised Learning on Multilayer Graphs
Sara Venturini, Andrea Cristofari, Francesco Rinaldi, Francesco Tudisco
A rule-general abductive learning by rough sets
Xu-chang Guo, Hou-biao Li
Morphological Classification of Radio Galaxies using Semi-Supervised Group Equivariant CNNs
Mir Sazzat Hossain, Sugandha Roy, K. M. B. Asad, Arshad Momen, Amin Ahsan Ali, M Ashraful Amin, A. K. M. Mahbubur Rahman
Open-world Semi-supervised Novel Class Discovery
Jiaming Liu, Yangqiming Wang, Tongze Zhang, Yulu Fan, Qinli Yang, Junming Shao
Rethinking Semi-supervised Learning with Language Models
Zhengxiang Shi, Francesco Tonolini, Nikolaos Aletras, Emine Yilmaz, Gabriella Kazai, Yunlong Jiao
BMB: Balanced Memory Bank for Imbalanced Semi-supervised Learning
Wujian Peng, Zejia Weng, Hengduo Li, Zuxuan Wu
Label Smarter, Not Harder: CleverLabel for Faster Annotation of Ambiguous Image Classification with Higher Quality
Lars Schmarje, Vasco Grossmann, Tim Michels, Jakob Nazarenus, Monty Santarossa, Claudius Zelenka, Reinhard Koch