Paper ID: 2403.06681
Out-of-distribution Partial Label Learning
Jintao Huang, Yiu-Ming Cheung, Chi-Man Vong
Partial Label Learning (PLL) tackles learning from data with inexact labels under the assumption that training and test objects come from the same distribution, i.e., a closed-set scenario. This assumption does not hold in real-world open-set scenarios, however, where test data may be Out-Of-Distribution (OOD), causing detection failures that significantly compromise a PLL model's security and trustworthiness. We term this previously unexplored problem Out-Of-Distribution Partial Label Learning (OODPLL) and propose the PLOOD framework to effectively resolve it. During the training phase, the framework leverages a self-supervised learning strategy to generate positive and negative samples for each object, emulating in-distribution and out-of-distribution data respectively; under these emulated distributions, the PLL model can learn features that discriminate OOD objects. In the inference phase, a novel Partial Energy (PE) scoring technique leverages the label confidence established during training to mine the actual labels, so that the inexact labeling inherent to PLL no longer degrades OOD detection. PLOOD is compared with SOTA PLL models and OOD scoring methods on the CIFAR-10 and CIFAR-100 datasets against various OOD datasets. The results demonstrate that PLOOD significantly outperforms SOTA PLL models, marking a substantial advancement in addressing PLL problems in real-world OOD scenarios.
Submitted: Mar 11, 2024
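The abstract describes a Partial Energy (PE) score that combines an energy-based OOD score with the per-class label confidence learned during PLL training. The paper's exact formulation is not given here, so the sketch below is a hypothetical illustration: it starts from the standard energy score of Liu et al. (2020) and, as an assumed variant, weights the logits by label confidence so that non-candidate classes contribute little to the score.

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Standard energy-based OOD score: E(x) = -T * log(sum_k exp(logit_k / T)).
    # Higher (less negative) energy suggests the input is more likely OOD.
    z = logits / T
    m = np.max(z)  # subtract the max for numerical stability
    return -T * (m + np.log(np.sum(np.exp(z - m))))

def partial_energy_score(logits, label_confidence, T=1.0):
    # Hypothetical "Partial Energy" variant (an assumption, not the paper's
    # exact definition): scale each logit by the label confidence learned
    # during PLL training before computing the energy, so classes outside
    # the candidate label set are suppressed.
    z = (logits * label_confidence) / T
    m = np.max(z)
    return -T * (m + np.log(np.sum(np.exp(z - m))))

# Example: a confident in-distribution prediction vs. uniform confidence.
logits = np.array([2.0, 1.0, 0.0])
confidence = np.array([0.7, 0.3, 0.0])  # assumed PLL label-confidence vector
print(energy_score(logits), partial_energy_score(logits, confidence))
```

With a uniform (all-ones) confidence vector, `partial_energy_score` reduces to the plain energy score, which is one sanity check on this kind of weighting scheme.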