Out-of-Distribution Detection
Out-of-distribution (OOD) detection aims to identify data points that differ significantly from a machine learning model's training data, which is crucial for reliable and safe model deployment. Current research focuses on developing novel scoring functions and model architectures, including those based on diffusion models, variational autoencoders, and vision-language models, to improve the accuracy and efficiency of OOD detection, often addressing challenges posed by imbalanced datasets and limited access to model parameters. By mitigating the risks of making predictions on unseen data, this field is vital for enhancing the trustworthiness of AI systems across diverse applications, from autonomous driving to medical diagnosis. A growing emphasis is placed on methods that are both effective and computationally efficient, particularly for resource-constrained environments.
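As a concrete illustration of the scoring-function approach mentioned above, the sketch below computes an energy-based OOD score from a classifier's logits, one common baseline among the many scoring functions studied in this area. It is a minimal sketch under assumed inputs (a batch of logits and a user-chosen threshold), not the method of any specific paper listed below.

```python
import torch


def energy_ood_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Negative free energy of the logits, used as an in-distribution score.

    Higher values suggest the input resembles the training distribution;
    lower values flag potential OOD samples.
    """
    return temperature * torch.logsumexp(logits / temperature, dim=-1)


def flag_ood(logits: torch.Tensor, threshold: float) -> torch.Tensor:
    """Boolean mask that is True where the energy score falls below the threshold."""
    return energy_ood_score(logits) < threshold


if __name__ == "__main__":
    # Hypothetical batch: 8 samples, 10 classes, from any pretrained classifier.
    logits = torch.randn(8, 10)
    scores = energy_ood_score(logits)
    # The threshold here is illustrative; in practice it is calibrated on
    # held-out in-distribution data (e.g., to a target false-positive rate).
    print(flag_ood(logits, threshold=scores.median()))
```

The same scaffold works for other post-hoc scores (e.g., maximum softmax probability or distance-based scores); only the scoring function changes.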
Papers
Making the Flow Glow -- Robot Perception under Severe Lighting Conditions using Normalizing Flow Gradients
Simon Kristoffersson Lind, Rudolph Triebel, Volker Krüger
Quantifying the Prediction Uncertainty of Machine Learning Models for Individual Data
Koby Bibas
EDGE: Unknown-aware Multi-label Learning by Energy Distribution Gap Expansion
Yuchen Sun, Qianqian Xu, Zitai Wang, Zhiyong Yang, Junwei He
Taylor Outlier Exposure
Kohei Fukuda, Hiroaki Aizawa
FEVER-OOD: Free Energy Vulnerability Elimination for Robust Out-of-Distribution Detection
Brian K.S. Isaac-Medina, Mauricio Che, Yona F.A. Gaus, Samet Akcay, Toby P. Breckon
NCDD: Nearest Centroid Distance Deficit for Out-Of-Distribution Detection in Gastrointestinal Vision
Sandesh Pokhrel, Sanjay Bhandari, Sharib Ali, Tryphon Lambrou, Anh Nguyen, Yash Raj Shrestha, Angus Watson, Danail Stoyanov, Prashnna Gyawali, Binod Bhattarai