Out-of-Distribution Detection
Out-of-distribution (OOD) detection aims to identify inputs that differ significantly from a machine learning model's training data, a capability crucial for reliable and safe model deployment. Current research focuses on developing novel scoring functions and model architectures, including ones built on diffusion models, variational autoencoders, and vision-language models, to improve the accuracy and efficiency of OOD detection, often under challenges such as imbalanced datasets and limited access to model parameters. The field is vital for the trustworthiness of AI systems across diverse applications, from autonomous driving to medical diagnosis, because it mitigates the risks of making predictions on data unlike anything seen in training. A growing emphasis is on methods that are both effective and computationally efficient, particularly for resource-constrained environments.
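To ground the scoring-function idea, the sketch below computes two classic post-hoc OOD scores from a classifier's logits: maximum softmax probability (Hendrycks & Gimpel, 2017) and the energy score (Liu et al., 2020). This is an illustration only, not the method of any paper listed below; the example logits and the 0.7 threshold are made-up values.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: higher score => more likely in-distribution.
    return softmax(logits).max(axis=-1)

def energy_score(logits, temperature=1.0):
    # Negative free energy, computed with a stable log-sum-exp;
    # higher score => more likely in-distribution.
    z = logits / temperature
    m = z.max(axis=-1)
    return temperature * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

def flag_ood(scores, threshold):
    # Flag inputs whose score falls below a threshold; in practice the
    # threshold is calibrated on held-out in-distribution data
    # (e.g. chosen so that 95% of it is retained).
    return scores < threshold

# Hypothetical logits for three inputs over five classes (made up).
logits = np.array([
    [8.0, 0.1, 0.2, 0.1, 0.0],   # confident prediction
    [1.1, 0.9, 1.0, 1.2, 0.8],   # near-uniform: a typical OOD signature
    [4.0, 3.8, 0.1, 0.0, 0.2],   # ambiguous between two classes
])
print(msp_score(logits))                  # ~[1.00, 0.24, 0.53]
print(flag_ood(msp_score(logits), 0.7))   # [False, True, True]
print(energy_score(logits))               # highest for the confident input
```

Both scores are "post-hoc" in that they require no retraining, only access to the model's logits; much of the work surveyed below replaces or augments such scores when logits are unreliable or unavailable.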
Papers
Out-of-Distribution Runtime Adaptation with Conformalized Neural Network Ensembles
Polo Contreras, Ola Shorinwa, Mac Schwager
Continual Unsupervised Out-of-Distribution Detection
Lars Doorenbos, Raphael Sznitman, Pablo Márquez-Neila
Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS
Hao Fu, Tunhou Zhang, Hai Li, Yiran Chen
MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities
Hao Dong, Yue Zhao, Eleni Chatzi, Olga Fink
WeiPer: OOD Detection using Weight Perturbations of Class Projections
Maximilian Granz, Manuel Heurich, Tim Landgraf
Reframing the Relationship in Out-of-Distribution Detection
YuXiao Lee, Xiaofeng Cao