Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by combining limited labeled data with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce noise and bias in the generated labels, employing teacher-student models and contrastive learning, and developing algorithms that exploit all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual labeling, thereby extending machine learning to domains with little annotated data.
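As a concrete illustration of the pseudo-labeling idea above (a generic sketch, not taken from any of the listed papers), the following shows minimal self-training: a classifier fit on the labeled set assigns pseudo-labels to unlabeled points, and only high-confidence predictions are promoted into the training set. The dataset, base model, confidence threshold, and number of rounds are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy setup (assumption): 50 labeled points, 950 unlabeled points.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_lab, y_lab = X[:50], y[:50]   # limited labeled data
X_unlab = X[50:]                # abundant unlabeled data

model = LogisticRegression(max_iter=1000)
threshold = 0.95  # confidence cutoff to filter out noisy pseudo-labels

for _ in range(5):  # a few self-training rounds
    model.fit(X_lab, y_lab)
    if len(X_unlab) == 0:       # nothing left to pseudo-label
        break
    probs = model.predict_proba(X_unlab)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    if not keep.any():          # stop when no prediction is confident enough
        break
    # Promote confident unlabeled points to the labeled set with pseudo-labels.
    X_lab = np.vstack([X_lab, X_unlab[keep]])
    y_lab = np.concatenate([y_lab, probs[keep].argmax(axis=1)])
    X_unlab = X_unlab[~keep]
```

The confidence threshold is the key knob: lowering it adds more pseudo-labels per round but admits more label noise, which is precisely the trade-off the pseudo-label refinement work surveyed here targets.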
Papers
Optimal Exact Recovery in Semi-Supervised Learning: A Study of Spectral Methods and Graph Convolutional Networks
Hai-Xiao Wang, Zhichao Wang
Learnable Prompting SAM-induced Knowledge Distillation for Semi-supervised Medical Image Segmentation
Kaiwen Huang, Tao Zhou, Huazhu Fu, Yizhe Zhang, Yi Zhou, Chen Gong, Dong Liang
Semi-Supervised Transfer Boosting (SS-TrBoosting)
Lingfei Deng, Changming Zhao, Zhenbang Du, Kun Xia, Dongrui Wu
Biologically-inspired Semi-supervised Semantic Segmentation for Biomedical Imaging
Luca Ciampi, Gabriele Lagani, Giuseppe Amato, Fabrizio Falchi
Benchmarking Attention Mechanisms and Consistency Regularization Semi-Supervised Learning for Post-Flood Building Damage Assessment in Satellite Images
Jiaxi Yu, Tomohiro Fukuda, Nobuyoshi Yabuki
Lightweight Contenders: Navigating Semi-Supervised Text Mining through Peer Collaboration and Self Transcendence
Qianren Mao, Weifeng Jiang, Junnan Liu, Chenghua Lin, Qian Li, Xianqing Wen, Jianxin Li, Jinhu Lu
Deep evolving semi-supervised anomaly detection
Jack Belham, Aryan Bhosale, Samrat Mukherjee, Biplab Banerjee, Fabio Cuzzolin