Paper ID: 2207.08210
A Simple Test-Time Method for Out-of-Distribution Detection
Ke Fan, Yikai Wang, Qian Yu, Da Li, Yanwei Fu
Neural networks are known to produce overconfident predictions on input images, even when these images are out-of-distribution (OOD) samples. This limits the application of neural network models in real-world scenarios where OOD samples exist. Many existing approaches identify OOD instances by exploiting various cues, such as finding irregular patterns in the feature space, logits space, gradient space, or the raw image space. In contrast, this paper proposes a simple yet effective Test-time Linear Training (ETLT) method for OOD detection. Empirically, we find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated with the features extracted by neural networks. Specifically, many state-of-the-art OOD algorithms, although designed to measure reliability in different ways, actually produce OOD scores that are mostly linearly related to the corresponding image features. Thus, by fitting a linear regression model at test time on pairs of image features and inferred OOD scores, we can make more precise OOD predictions for the test instances. We further propose an online variant of the method, which achieves promising performance and is more practical for real-world applications. Remarkably, we improve FPR95 from $51.37\%$ to $12.30\%$ on the CIFAR-10 benchmark with maximum softmax probability as the base OOD detector. Extensive experiments on several benchmark datasets show the efficacy of ETLT for the OOD detection task.
Submitted: Jul 17, 2022
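The abstract describes fitting a linear model from image features to base OOD scores at test time, plus an online variant, but does not spell out the fitting procedure. Below is a minimal sketch of one plausible reading, assuming ridge-regularized least squares on (feature, MSP-score) pairs for the batch setting and a recursive least-squares update for the online setting; the synthetic `features` and `msp_scores` arrays and all function names are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch of test-time linear training for OOD score refinement (not the
# official ETLT code). Features and MSP scores are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a test batch: D-dim backbone features and base OOD scores
# (e.g., maximum softmax probability), assumed to be nearly linear in features.
N, D = 512, 64
features = rng.normal(size=(N, D))
true_w = rng.normal(size=D)
msp_scores = features @ true_w + 0.1 * rng.normal(size=N)

def fit_linear(X, y, lam=1e-3):
    """Ridge-regularized least squares: solve (X^T X + lam*I) w = X^T y."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

# Batch variant: fit on the test batch, then use the regression output as
# the refined OOD score for thresholding.
w = fit_linear(features, msp_scores)
refined_scores = predict(features, w)

# Online variant (sketch): recursive least squares, updating the linear
# model one test sample at a time instead of refitting on the full batch.
P = np.eye(D + 1) / 1e-3          # inverse-covariance estimate
w_online = np.zeros(D + 1)
for x, s in zip(features, msp_scores):
    xb = np.append(x, 1.0)                 # feature with bias term
    k = P @ xb / (1.0 + xb @ P @ xb)       # gain vector
    w_online += k * (s - xb @ w_online)    # correct by prediction error
    P -= np.outer(k, xb @ P)               # update inverse covariance
```

In this reading, the refined score is the linear model's prediction rather than the raw detector output, which is consistent with the abstract's claim that base OOD scores are mostly linear in the features; the online recursive update avoids storing the whole test stream.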