Out-of-Distribution Detection
Out-of-distribution (OOD) detection aims to identify inputs that differ significantly from a machine learning model's training data, a capability that is crucial for reliable and safe model deployment. Current research focuses on developing novel scoring functions and model architectures, including ones based on diffusion models, variational autoencoders, and vision-language models, to improve the accuracy and efficiency of detection, often under practical constraints such as imbalanced datasets and limited access to model parameters. By mitigating the risk of confident predictions on unseen data, this work is vital to the trustworthiness of AI systems across diverse applications, from autonomous driving to medical diagnosis. A growing emphasis is on methods that are both effective and computationally efficient, particularly for resource-constrained environments.
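To make the scoring-function idea concrete, below is a minimal NumPy sketch of two widely used post-hoc OOD scores computed from a classifier's logits: maximum softmax probability (Hendrycks & Gimpel, 2017) and the energy score (Liu et al., 2020). The function names and the threshold value are illustrative choices, not the method of any particular paper listed below.

```python
import numpy as np

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability: higher means more in-distribution."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def energy_score(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Negative free energy, T * logsumexp(logits / T): higher means more in-distribution."""
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)  # stable log-sum-exp
    return temperature * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(4, 10))  # a batch of 4 examples, 10 classes
    threshold = 0.5  # illustrative; in practice tuned on held-out in-distribution data
    print("MSP scores:", msp_score(logits))
    print("flagged as OOD:", msp_score(logits) < threshold)
    print("energy scores:", energy_score(logits, temperature=1.0))
```

In practice the threshold is calibrated on held-out in-distribution data, for example at the value that retains 95% of in-distribution samples, which is the convention behind the FPR@95TPR metric reported by benchmarks such as OpenOOD below.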
Papers
OpenOOD: Benchmarking Generalized Out-of-Distribution Detection
Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, Ziwei Liu
Exploiting Mixed Unlabeled Data for Detecting Samples of Seen and Unseen Out-of-Distribution Classes
Yi-Xuan Sun, Wei Wang
Out-of-Distribution Detection and Selective Generation for Conditional Language Models
Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, Peter J. Liu
A Novel Explainable Out-of-Distribution Detection Approach for Spiking Neural Networks
Aitor Martinez Seras, Javier Del Ser, Jesus L. Lobo, Pablo Garcia-Bringas, Nikola Kasabov
Your Out-of-Distribution Detection Method is Not Robust!
Mohammad Azizmalayeri, Arshia Soltani Moakhar, Arman Zarei, Reihaneh Zohrabi, Mohammad Taghi Manzuri, Mohammad Hossein Rohban