Semi-Supervised
Semi-supervised learning trains machine learning models on both labeled and unlabeled data, addressing the scarcity of labeled data that is a common bottleneck in many applications. Current research focuses on improving the quality of pseudo-labels generated from unlabeled data, often employing techniques such as contrastive learning, knowledge distillation, and mean teacher models within architectures including variational autoencoders, transformers, and graph neural networks. This approach is proving valuable across diverse fields, improving model performance in areas such as medical image analysis, object detection, and environmental sound classification, where acquiring large labeled datasets is expensive or impractical.
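As a rough illustration of the pseudo-labeling idea mentioned above, the sketch below combines a supervised loss on a small labeled batch with a confidence-filtered loss on pseudo-labeled unlabeled data. It is a minimal PyTorch example with toy data; the model, CONF_THRESHOLD, and UNLABELED_WEIGHT values are arbitrary placeholders and are not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy model and data stand in for a real backbone and dataset.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

labeled_x = torch.randn(8, 16)
labeled_y = torch.randint(0, 3, (8,))
unlabeled_x = torch.randn(32, 16)

CONF_THRESHOLD = 0.95   # keep only high-confidence pseudo-labels (assumed value)
UNLABELED_WEIGHT = 0.5  # down-weight the unsupervised loss term (assumed value)

for step in range(10):
    optimizer.zero_grad()

    # Supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(labeled_x), labeled_y)

    # Generate pseudo-labels on unlabeled data without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= CONF_THRESHOLD

    # Unsupervised loss only on confidently pseudo-labeled examples.
    if mask.any():
        unsup_loss = F.cross_entropy(model(unlabeled_x[mask]), pseudo_y[mask])
    else:
        unsup_loss = torch.tensor(0.0)

    loss = sup_loss + UNLABELED_WEIGHT * unsup_loss
    loss.backward()
    optimizer.step()
```

Variants such as mean teacher or knowledge distillation replace the model's own predictions with those of a separate, more stable teacher network when generating the pseudo-labels.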
Papers
Accurate and fast anomaly detection in industrial processes and IoT environments
Simone Tonini, Andrea Vandin, Francesca Chiaromonte, Daniele Licari, Fernando Barsacchi
Reliable Student: Addressing Noise in Semi-Supervised 3D Object Detection
Farzad Nozarian, Shashank Agarwal, Farzaneh Rezaeianaran, Danish Shahzad, Atanas Poibrenski, Christian Müller, Philipp Slusallek
Label Propagation Training Schemes for Physics-Informed Neural Networks and Gaussian Processes
Ming Zhong, Dehao Liu, Raymundo Arroyave, Ulisses Braga-Neto
Chinese Sequence Labeling with Semi-Supervised Boundary-Aware Language Model Pre-training
Longhui Zhang, Dingkun Long, Meishan Zhang, Yanzhao Zhang, Pengjun Xie, Min Zhang