Positive Unlabeled

Positive-unlabeled (PU) learning tackles the challenge of training classifiers from datasets that contain only positively labeled and unlabeled examples, with no explicitly labeled negative instances. Current research focuses on improving classifier performance, particularly under class imbalance or on complex data structures such as graphs, using techniques like asymmetric loss functions, self-supervised learning, and graph-aware algorithms. These advances matter for real-world applications where obtaining negative labels is expensive or impractical, such as medical diagnosis or anomaly detection, because they improve the efficiency and accuracy of models trained on incompletely labeled data.
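One classical baseline for this setting is the Elkan–Noto estimator: train a classifier to predict the "labeled" indicator, estimate the label frequency c = P(labeled | positive) from the labeled positives, then divide the classifier's scores by c to recover the true class posterior. The sketch below illustrates the idea on synthetic data; the dataset construction, label frequency, and threshold are illustrative assumptions, not part of any specific paper's method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic fully labeled data, used only to simulate a PU dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# PU construction: only a fraction c_true of positives carry a label;
# s = 1 means "labeled positive", s = 0 means "unlabeled".
c_true = 0.3
s = np.zeros_like(y)
pos = np.where(y == 1)[0]
labeled = rng.choice(pos, size=int(c_true * len(pos)), replace=False)
s[labeled] = 1

# Step 1: fit a "non-traditional" classifier g(x) ~ P(s = 1 | x).
g = LogisticRegression(max_iter=1000).fit(X, s)

# Step 2: estimate c = P(s = 1 | y = 1) as the mean score on labeled positives.
c_hat = g.predict_proba(X[s == 1])[:, 1].mean()

# Step 3: corrected posterior P(y = 1 | x) = g(x) / c, clipped to [0, 1].
p_pos = np.clip(g.predict_proba(X)[:, 1] / c_hat, 0.0, 1.0)
pred = (p_pos >= 0.5).astype(int)

# Evaluate against the held-back true labels (possible only in simulation).
acc = (pred == y).mean()
print(f"estimated c = {c_hat:.2f}, accuracy vs. true labels = {acc:.2f}")
```

The key assumption is "selected completely at random": labeled positives are a uniform sample of all positives, which is what makes the constant-c correction valid.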

Papers