Positive-Unlabeled Data
Positive-unlabeled (PU) learning tackles the challenge of training binary classifiers from datasets that contain only positively labeled examples alongside unlabeled data, a common scenario when negative labels are unavailable or unreliable. Current research focuses on improving classifier performance under different sampling schemes (e.g., case-control vs. single-sample) and on correcting labeling biases, using techniques such as asymmetric loss functions, variational autoencoders, and logistic-regression-based estimators. These advances aim to make PU learning more accurate and robust in settings where fully labeled datasets are impractical or costly to obtain, such as detecting AI-generated text or image quality assessment.
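As a concrete illustration (not tied to any specific paper indexed here), the sketch below implements the classic Elkan–Noto reweighting approach under the selected-completely-at-random (SCAR) assumption: a logistic regression is trained to predict the *label indicator* s rather than the true class y, the labeling propensity c = P(s=1 | y=1) is estimated as the mean score on labeled positives, and P(y=1 | x) is then recovered as P(s=1 | x) / c. The synthetic data, hyperparameters, and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed for illustration): two Gaussian blobs in 2-D,
# y=1 positives centered at (2, 2), y=0 negatives centered at (0, 0).
n = 2000
y = rng.integers(0, 2, n)
X = rng.normal(loc=y[:, None] * 2.0, scale=1.0, size=(n, 2))

# SCAR labeling: each positive is labeled with constant probability c_true;
# s=1 means "observed as a labeled positive", s=0 means "unlabeled".
c_true = 0.3
s = (y == 1) & (rng.random(n) < c_true)

def fit_logreg(X, t, lr=0.1, steps=3000):
    """Plain logistic regression fit by batch gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - t) / len(t)
    return w

def predict_proba(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Step 1: train a "non-traditional" classifier g(x) that approximates P(s=1 | x).
w = fit_logreg(X, s.astype(float))

# Step 2: estimate c = P(s=1 | y=1) as the mean score over labeled positives.
c_hat = predict_proba(w, X[s]).mean()

# Step 3: under SCAR, P(y=1 | x) = P(s=1 | x) / c; clip to keep a valid probability.
p_y = np.clip(predict_proba(w, X) / c_hat, 0.0, 1.0)

accuracy = ((p_y > 0.5) == (y == 1)).mean()
print(f"c_hat={c_hat:.2f}  accuracy={accuracy:.2f}")
```

Note that the corrected scores p_y, not the raw scores on s, are what make the 0.5 decision threshold meaningful: without dividing by c, every example would look unlikely to be positive simply because most positives are unlabeled.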
Papers
Indexed papers (by date): July 14, 2024 · May 31, 2024 · December 4, 2023 · June 5, 2023 · May 29, 2023 · March 21, 2023 · November 10, 2022 · September 16, 2022 · April 19, 2022