Imbalanced Semi-Supervised Learning
Imbalanced semi-supervised learning (SSL) tackles the challenge of training models with limited labeled data when the class distribution is skewed in both the labeled and unlabeled sets. Current research focuses on mitigating the bias introduced by unreliable pseudo-labels generated under this imbalance, using techniques such as class-distribution-aware debiasing, balanced contrastive learning, and refined pseudo-label generation strategies. These advances aim to improve model accuracy and robustness in real-world applications where sufficient labeled data is costly or impractical to obtain, with impact on fields such as image classification and point cloud segmentation.
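To make the class-distribution-aware debiasing idea concrete, the sketch below adjusts a model's logits on unlabeled samples by the estimated class prior before thresholding them into pseudo-labels, so that head classes do not monopolize the pseudo-labels. This is a minimal, generic illustration rather than the procedure of any specific paper; the function name `debiased_pseudo_labels`, the prior-strength parameter `tau`, and the confidence `threshold` are illustrative assumptions.

```python
import numpy as np

def debiased_pseudo_labels(logits, class_prior, tau=1.0, threshold=0.95):
    """Assign pseudo-labels after subtracting the log class prior from the logits.

    Under class imbalance, raw confidences favor head classes; removing the
    prior (a logit-adjustment-style correction) pushes pseudo-labels back
    toward the tail classes.

    logits      : (N, C) array of model logits on unlabeled samples
    class_prior : (C,) estimated class frequencies, e.g. from the labeled set
    tau         : strength of the prior correction (illustrative knob)
    threshold   : confidence cutoff for keeping a pseudo-label
    """
    adjusted = logits - tau * np.log(class_prior + 1e-12)
    # numerically stable softmax over the adjusted logits
    probs = np.exp(adjusted - adjusted.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold  # keep only confident pseudo-labels
    return labels, mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(8, 3))    # 8 unlabeled samples, 3 classes
    prior = np.array([0.7, 0.2, 0.1])   # long-tailed class distribution
    labels, mask = debiased_pseudo_labels(logits, prior, threshold=0.6)
    print(labels[mask])                 # pseudo-labels that pass the cutoff
```

In practice the class prior is typically estimated from the labeled set or refined online from the model's running predictions on unlabeled data.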
Papers
Recent papers on this topic were published between July 28, 2022 and July 7, 2024.