Unsupervised Out-of-Distribution Detection

Unsupervised out-of-distribution (U-OOD) detection aims to identify data points that deviate from the training distribution without using labeled examples of outliers. Current research focuses on improving likelihood-based methods, particularly within deep generative models such as variational autoencoders, and on alternative approaches such as contrastive learning and methods built on data invariants and non-linear transformations. This line of work matters for the robustness and reliability of deep learning systems, especially in safety-critical settings such as medical image analysis where labeled outlier data is scarce, and it has shown promise for improving anomaly detection across a range of domains.
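
To make the likelihood-based idea concrete, the sketch below fits a density model on unlabeled training data and flags low-likelihood test points as OOD. It uses scikit-learn's GaussianMixture as a lightweight stand-in for a deep generative model such as a VAE; the synthetic data, component count, and percentile threshold are illustrative assumptions, not a reference implementation of any particular method.

```python
# Minimal sketch of likelihood-based U-OOD scoring. A Gaussian mixture stands in
# for a deep generative model (e.g. a VAE); data and threshold are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Unlabeled "in-distribution" training data from the nominal process.
train = rng.normal(loc=0.0, scale=1.0, size=(2000, 8))

# Fit a density model on training data alone -- no outlier labels are used.
density = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
density.fit(train)

# Score new points by log-likelihood under the fitted model
# (higher score = more "in-distribution").
in_dist = rng.normal(loc=0.0, scale=1.0, size=(5, 8))
out_dist = rng.normal(loc=6.0, scale=1.0, size=(5, 8))
scores_in = density.score_samples(in_dist)
scores_out = density.score_samples(out_dist)

# Choose a threshold from training-set scores (e.g. the 1st percentile),
# since no labeled outliers are available for calibration.
threshold = np.percentile(density.score_samples(train), 1)
print("in-distribution flagged as OOD: ", scores_in < threshold)
print("out-of-distribution flagged as OOD:", scores_out < threshold)
```

With deep generative models, raw likelihood scores computed this way can misrank OOD inputs in practice, which is one reason much of the current research targets better likelihood-based scores rather than using the model likelihood directly.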

Papers