Paper ID: 2112.11648
Out-of-distribution Detection with Boundary Aware Learning
Sen Pei, Xin Zhang, Bin Fan, Gaofeng Meng
There is an increasing need to determine whether inputs are out-of-distribution (\emph{OOD}) in order to safely deploy machine learning models in open-world scenarios. Typical neural classifiers are based on the closed-world assumption, where the training data and the test data are drawn \emph{i.i.d.} from the same distribution, and as a result they give over-confident predictions even when faced with \emph{OOD} inputs. To tackle this problem, previous studies either use real outliers for training or generate synthetic \emph{OOD} data under strong assumptions, approaches that are respectively costly or hard to generalize. In this paper, we propose boundary aware learning (\textbf{BAL}), a novel framework that learns the distribution of \emph{OOD} features adaptively. The key idea of BAL is to use a generator to produce \emph{OOD} features that progress from trivial to hard, while a discriminator is trained to distinguish these synthetic \emph{OOD} features from in-distribution (\emph{ID}) features. Benefiting from this adversarial training scheme, the discriminator separates \emph{ID} and \emph{OOD} features well, enabling more robust \emph{OOD} detection. The proposed BAL achieves \emph{state-of-the-art} performance on classification benchmarks, reducing FPR95 by up to 13.9\% compared with previous methods.
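The abstract describes an adversarial scheme between a feature-space generator and an ID-vs-OOD discriminator. Below is a minimal sketch of such a scheme, not the authors' implementation: all names, dimensions, and hyperparameters (`FEATURE_DIM`, `NOISE_DIM`, the network shapes, learning rates) are illustrative assumptions, and the progressive trivial-to-hard curriculum emerges here only implicitly from standard GAN-style training.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: FEATURE_DIM would match the penultimate-layer
# feature size of a frozen, pre-trained classifier.
FEATURE_DIM, NOISE_DIM = 512, 128

# Generator: maps noise to synthetic OOD features in the classifier's feature space.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, FEATURE_DIM),
)

# Discriminator: scores a feature vector as ID (label 1) vs. OOD (label 0).
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(id_features: torch.Tensor) -> None:
    """One adversarial step; id_features are penultimate-layer features
    of in-distribution samples extracted by the frozen classifier."""
    batch = id_features.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    ood_features = generator(noise)

    # Discriminator update: separate real ID features from synthetic OOD features.
    d_loss = (bce(discriminator(id_features), torch.ones(batch, 1)) +
              bce(discriminator(ood_features.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: push synthetic features toward the ID manifold, so the
    # generated OOD features become progressively harder to distinguish.
    g_loss = bce(discriminator(generator(noise)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At test time, under this sketch, the discriminator's logit on a test sample's features would serve directly as the OOD score: low scores flag likely OOD inputs.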
Submitted: Dec 22, 2021