Positive-Unlabeled (PU) Learning
Positive-unlabeled (PU) learning tackles the challenge of training classifiers from datasets that contain only positively labeled and unlabeled examples, with no explicitly labeled negative instances. Current research focuses on improving classifier performance, particularly under class imbalance or on complex data structures such as graphs, using techniques like asymmetric loss functions, self-supervised learning, and graph-aware algorithms. These advances matter for real-world applications where negative labels are expensive or impractical to obtain, such as medical diagnosis and anomaly detection, because they improve the efficiency and accuracy of models trained on incompletely labeled data.
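To make the setting concrete, below is a minimal sketch of one common PU formulation, the non-negative PU (nnPU) risk estimator of Kiryo et al. (2017), written in Python with PyTorch. It treats unlabeled data as a mixture of positives and negatives weighted by an assumed class prior; the prior value, the sigmoid surrogate loss, and the toy batch sizes are illustrative assumptions, not details taken from any specific paper listed on this page.

import torch

def nnpu_risk(scores_p, scores_u, prior, loss=lambda z: torch.sigmoid(-z)):
    """Non-negative PU risk estimator (Kiryo et al., 2017).

    scores_p : model outputs g(x) on positively labeled examples
    scores_u : model outputs g(x) on unlabeled examples
    prior    : assumed class prior pi = P(y = +1), which must be
               known or estimated separately
    loss     : surrogate loss l(z); sigmoid loss by default
    """
    # Risk of scoring labeled positives as positive / as negative
    risk_p_pos = loss(scores_p).mean()    # E_p[l(g(x))]
    risk_p_neg = loss(-scores_p).mean()   # E_p[l(-g(x))]
    # Risk of scoring unlabeled data as negative
    risk_u_neg = loss(-scores_u).mean()   # E_u[l(-g(x))]

    # Unbiased estimate of the negative-class risk; clamped at zero
    # so the estimator cannot go negative (the "non-negative" correction)
    risk_neg = risk_u_neg - prior * risk_p_neg
    return prior * risk_p_pos + torch.clamp(risk_neg, min=0.0)

# Toy usage: stand-in scores g(x) for a positive batch and an unlabeled batch
scores_p = torch.randn(32, requires_grad=True)
scores_u = torch.randn(256, requires_grad=True)
risk = nnpu_risk(scores_p, scores_u, prior=0.3)  # prior is assumed known or estimated
risk.backward()  # in training, gradients would flow back into the model

The key design point is that no negative labels are ever needed: the negative-class risk is recovered from the unlabeled set after subtracting the (prior-weighted) contribution of the positives, and the clamp keeps the estimator from overfitting when that difference turns negative.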