Semi-Supervised Learning
Semi-supervised learning (SSL) aims to improve model accuracy by leveraging a small amount of labeled data alongside abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and confirmation bias that pseudo-labels introduce, employing teacher-student models and contrastive learning, and developing algorithms that effectively exploit all available unlabeled samples, including those from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual annotation, extending machine learning to domains where labeled data is scarce.
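To make the core pseudo-labeling recipe concrete, the sketch below shows a confidence-thresholded teacher-student training step in PyTorch, in the spirit of Mean Teacher and FixMatch: the teacher pseudo-labels unlabeled data, only high-confidence predictions train the student, and the teacher tracks the student via an exponential moving average. The network, threshold, and decay values are illustrative assumptions, not taken from any paper listed here, and data augmentation is omitted for brevity.

```python
# Minimal pseudo-labeling sketch with an EMA teacher-student pair.
# SmallNet, CONF_THRESHOLD, and EMA_DECAY are illustrative choices,
# not the method of any specific paper below.
import torch
import torch.nn as nn
import torch.nn.functional as F

CONF_THRESHOLD = 0.95  # keep only high-confidence pseudo-labels
EMA_DECAY = 0.999      # teacher weights slowly track the student

class SmallNet(nn.Module):
    def __init__(self, in_dim=32, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_classes))

    def forward(self, x):
        return self.net(x)

student = SmallNet()
teacher = SmallNet()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is never updated by gradients

opt = torch.optim.SGD(student.parameters(), lr=0.03, momentum=0.9)

def train_step(x_lab, y_lab, x_unlab):
    # Supervised loss on the small labeled batch.
    sup_loss = F.cross_entropy(student(x_lab), y_lab)

    # Teacher predicts on unlabeled data; keep confident predictions only.
    with torch.no_grad():
        probs = F.softmax(teacher(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= CONF_THRESHOLD

    # Unsupervised loss: student must match the teacher's pseudo-labels.
    unsup_loss = torch.tensor(0.0)
    if mask.any():
        unsup_loss = F.cross_entropy(student(x_unlab[mask]), pseudo[mask])

    loss = sup_loss + unsup_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

    # EMA update: teacher = decay * teacher + (1 - decay) * student.
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(EMA_DECAY).add_(sp, alpha=1 - EMA_DECAY)
    return loss.item()

# Toy usage with random tensors standing in for real data:
# 8 labeled examples and 64 unlabeled ones per step.
loss = train_step(torch.randn(8, 32), torch.randint(0, 10, (8,)),
                  torch.randn(64, 32))
```

The fixed confidence threshold is the simplest noise filter; several of the papers below replace it with learned or automatic thresholds, curriculum schedules, or mechanisms for handling out-of-distribution and class-imbalanced unlabeled data.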
Papers
Piecewise Planar Hulls for Semi-Supervised Learning of 3D Shape and Pose from 2D Images
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, Luc Van Gool
Boosting Semi-Supervised 3D Object Detection with Semi-Sampling
Xiaopei Wu, Yang Zhao, Liang Peng, Hua Chen, Xiaoshui Huang, Binbin Lin, Haifeng Liu, Deng Cai, Wanli Ouyang
Improving Semi-supervised Deep Learning by using Automatic Thresholding to Deal with Out of Distribution Data for COVID-19 Detection using Chest X-ray Images
Isaac Benavides-Mata, Saul Calderon-Ramirez
Analysing the effectiveness of a generative model for semi-supervised medical image segmentation
Margherita Rosnati, Fabio De Sousa Ribeiro, Miguel Monteiro, Daniel Coelho de Castro, Ben Glocker
Bootstrapping the Relationship Between Images and Their Clean and Noisy Labels
Brandon Smart, Gustavo Carneiro
Dual-Curriculum Teacher for Domain-Inconsistent Object Detection in Autonomous Driving
Longhui Yu, Yifan Zhang, Lanqing Hong, Fei Chen, Zhenguo Li
Continuous Pseudo-Labeling from the Start
Dan Berrebbi, Ronan Collobert, Samy Bengio, Navdeep Jaitly, Tatiana Likhomanenko