Semi-Supervised Learning
Semi-supervised learning (SSL) improves model accuracy by combining a small amount of labeled data with abundant unlabeled data. Current research focuses on refining pseudo-labeling techniques to reduce the noise and bias that pseudo-labels introduce, on teacher-student and contrastive learning frameworks, and on algorithms that exploit all available unlabeled samples, including those drawn from open sets or with imbalanced class distributions. These advances matter because they reduce reliance on expensive, time-consuming manual labeling, extending machine learning to domains where annotated data is scarce.
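To make the pseudo-labeling idea concrete, below is a minimal PyTorch sketch of confidence-thresholded pseudo-labeling in the spirit of FixMatch-style methods. The model, the 0.95 threshold, and the masking scheme are illustrative assumptions, not the method of any particular paper listed here; in practice the training pass would typically use a strongly augmented view of the same inputs.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_batch, threshold=0.95):
    """Generate pseudo-labels for unlabeled inputs and return a masked loss.

    Predictions below `threshold` confidence are ignored, one common way to
    limit the noise that incorrect pseudo-labels would otherwise inject.
    """
    # Predict pseudo-labels without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_batch), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # keep only confident predictions

    # Training pass (in practice, on a strongly augmented view of the batch).
    logits = model(unlabeled_batch)
    per_example = F.cross_entropy(logits, pseudo_labels, reduction="none")
    # Average over the whole batch so low-confidence examples contribute zero.
    return (per_example * mask).mean()

# Usage with a toy classifier (hypothetical shapes, for illustration only):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
loss = pseudo_label_loss(model, torch.randn(32, 1, 28, 28))
loss.backward()
```

This unlabeled-data loss would normally be added, with some weighting, to a standard supervised loss on the labeled portion of the batch; teacher-student variants replace the first forward pass with a separate (often momentum-averaged) teacher model.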
Papers
Weakly Supervised Regional and Temporal Learning for Facial Action Unit Recognition
Jingwei Yan, Jingjing Wang, Qiang Li, Chunmao Wang, Shiliang Pu
Dynamic Supervisor for Cross-dataset Object Detection
Ze Chen, Zhihang Fu, Jianqiang Huang, Mingyuan Tao, Shengyu Li, Rongxin Jiang, Xiang Tian, Yaowu Chen, Xian-sheng Hua
Semi-Weakly Supervised Object Detection by Sampling Pseudo Ground-Truth Boxes
Akhil Meethal, Marco Pedersoli, Zhongwen Zhu, Francisco Perdigon Romero, Eric Granger
Semi-Supervised Graph Learning Meets Dimensionality Reduction
Alex Morehead, Watchanan Chantapakul, Jianlin Cheng
Towards Semi-Supervised Deep Facial Expression Recognition with An Adaptive Confidence Margin
Hangyu Li, Nannan Wang, Xi Yang, Xiaoyu Wang, Xinbo Gao
Scale-Equivalent Distillation for Semi-Supervised Object Detection
Qiushan Guo, Yao Mu, Jianyu Chen, Tianqi Wang, Yizhou Yu, Ping Luo
Negative Selection by Clustering for Contrastive Learning in Human Activity Recognition
Jinqiang Wang, Tao Zhu, Liming Chen, Huansheng Ning, Yaping Wan