Zero-Shot Out-of-Distribution Detection

Zero-shot out-of-distribution (OOD) detection aims to build machine learning models that can reliably identify inputs not belonging to the classes seen during training, without any retraining or fine-tuning. Current research leans heavily on vision-language models such as CLIP, combined with techniques like outlier label exposure, prototype learning, and Bayesian scoring, to improve detection accuracy and robustness; a minimal scoring sketch follows below. This capability is crucial for deploying machine learning models safely in real-world applications, particularly in safety-critical domains like autonomous driving, where unexpected inputs must be handled correctly, and for enhancing the reliability of general-purpose AI systems. Recent work also shows that even advanced OOD detection methods remain vulnerable to adversarial attacks, underscoring the need for further research into robust and reliable solutions.

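To make the CLIP-based approach concrete, the sketch below scores an image by its maximum softmax probability over similarities to in-distribution class prompts, in the spirit of maximum-concept-matching-style scores; a low score suggests the input is OOD. The model checkpoint, prompt template, temperature, and threshold here are illustrative assumptions, not a prescribed implementation from any single paper.

```python
# A minimal sketch of zero-shot OOD scoring with CLIP: compare an image
# embedding against text embeddings of in-distribution class prompts and
# use the maximum softmax probability as a confidence score.
# Checkpoint, prompt template, temperature, and threshold are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def ood_score(image: Image.Image, class_names: list[str],
              temperature: float = 0.01) -> float:
    """Return a confidence score in [0, 1]; low values suggest OOD."""
    prompts = [f"a photo of a {c}" for c in class_names]
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
        # L2-normalize projected embeddings so the dot product is cosine similarity.
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (img @ txt.T).squeeze(0)               # similarity to each class prompt
    probs = torch.softmax(sims / temperature, dim=-1)
    return probs.max().item()                     # max softmax prob as the score

# Usage: flag inputs whose score falls below a threshold calibrated on
# held-out in-distribution data (the 0.5 here is purely illustrative).
# score = ood_score(Image.open("example.jpg"), ["cat", "dog", "car"])
# is_ood = score < 0.5
```

Because the class set is specified only through text prompts, the same model can be pointed at a new label space without retraining, which is what makes the detection zero-shot.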
Papers