Unlabeled Example
Work on unlabeled examples in machine learning focuses on effectively leveraging abundant unlabeled data to improve model performance, particularly in semi-supervised and positive-unlabeled (PU) learning. Current research emphasizes robust methods for selecting informative unlabeled examples, often using confidence calibration, contrastive learning, and distance-based pseudo-labeling to mitigate the risks of noisy or imbalanced data. These advances reduce reliance on expensive labeled data and improve model accuracy and efficiency across applications such as image classification, natural language processing, and object detection.
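As a concrete illustration of one such selection strategy, the sketch below implements confidence-thresholded pseudo-labeling on synthetic data: a model trained on a small labeled set assigns labels to unlabeled examples only when its predicted probability clears a threshold. The dataset, the logistic-regression model, and the 0.95 threshold are illustrative assumptions, not the method of any particular paper.

```python
# Minimal sketch of confidence-thresholded pseudo-labeling (semi-supervised setting).
# All names and the 0.95 threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: a small labeled set and a large unlabeled pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled_idx = rng.choice(len(X), size=100, replace=False)
unlabeled_mask = np.ones(len(X), dtype=bool)
unlabeled_mask[labeled_idx] = False

X_lab, y_lab = X[labeled_idx], y[labeled_idx]
X_unlab = X[unlabeled_mask]

CONFIDENCE_THRESHOLD = 0.95  # assumed value; a higher threshold admits fewer but cleaner pseudo-labels

for round_ in range(3):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

    # Predict on the unlabeled pool and keep only high-confidence examples.
    proba = model.predict_proba(X_unlab)
    confidence = proba.max(axis=1)
    confident = confidence >= CONFIDENCE_THRESHOLD
    if not confident.any():
        break

    # Move confidently pseudo-labeled examples into the labeled set.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]

    print(f"round {round_}: added {int(confident.sum())} pseudo-labeled examples")
```

The threshold controls the trade-off the summary alludes to: too low and noisy pseudo-labels accumulate, too high and little unlabeled data is ever used; confidence calibration and distance-based selection are alternative ways of making that filter more reliable.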