Out-of-Distribution Detection
Out-of-distribution (OOD) detection aims to identify data points that differ significantly from a machine learning model's training data, a capability that is crucial for reliable and safe model deployment. Current research focuses on developing novel scoring functions and model architectures, including those based on diffusion models, variational autoencoders, and vision-language models, to improve the accuracy and efficiency of OOD detection, often addressing challenges posed by imbalanced datasets and limited access to model parameters. By mitigating the risks of making predictions on unseen data, this work enhances the trustworthiness of AI systems across diverse applications, from autonomous driving to medical diagnosis. A growing emphasis is placed on methods that are both effective and computationally efficient, particularly for resource-constrained environments.
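To make the notion of a "scoring function" concrete, below is a minimal sketch of one classic baseline: the maximum softmax probability (MSP) score, which flags inputs whose predicted class distribution is close to uniform. The function names and the threshold value are illustrative assumptions, not taken from any of the papers listed here.

```python
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability (MSP) score per input.

    Higher scores suggest in-distribution inputs; lower scores
    flag potential OOD inputs (a standard baseline scoring rule).
    """
    return F.softmax(logits, dim=-1).max(dim=-1).values

def detect_ood(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Boolean mask marking inputs whose MSP falls below the
    (illustrative) threshold, i.e., likely out-of-distribution."""
    return msp_score(logits) < threshold

# Toy usage: a confident prediction vs. a near-uniform one.
logits = torch.tensor([[5.0, 0.1, 0.2],    # peaked -> in-distribution
                       [0.9, 1.0, 1.1]])   # flat   -> likely OOD
print(msp_score(logits))   # approx. tensor([0.9846, 0.3672])
print(detect_ood(logits))  # tensor([False, True])
```

In practice, the threshold is typically calibrated on held-out in-distribution data (e.g., to a target false-positive rate), and much of the research surveyed above replaces MSP with richer scores derived from density estimates, feature-space distances, or auxiliary models.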
Papers
Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities
Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
Meta-learning for Out-of-Distribution Detection via Density Estimation in Latent Space
Tomoharu Iwata, Atsutoshi Kumagai
Multiple Testing Framework for Out-of-Distribution Detection
Akshayaa Magesh, Venugopal V. Veeravalli, Anirban Roy, Susmit Jha
Dual Representation Learning for Out-of-Distribution Detection
Zhilin Zhao, Longbing Cao
Out-of-distribution Detection by Cross-class Vicinity Distribution of In-distribution Data
Zhilin Zhao, Longbing Cao, Kun-Yu Lin
Supervision Adaptation Balancing In-distribution Generalization and Out-of-distribution Detection
Zhilin Zhao, Longbing Cao, Kun-Yu Lin