Unsupervised Out-of-Distribution Detection
Unsupervised out-of-distribution (OOD) detection aims to identify data points that deviate from the training distribution without access to labeled OOD examples. Current research focuses on improving likelihood-based methods, leveraging deep generative models such as variational autoencoders and diffusion models, and developing alternative approaches, including contrastive learning and scores based on data invariants and non-linear transformations, to better separate OOD samples from in-distribution data. Effective OOD detection is crucial for building robust and reliable machine learning systems, particularly in safety-critical applications such as medical image analysis and robotics, where unrecognized distribution shift can have serious consequences.
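To make the likelihood-based family of methods concrete, the following is a minimal sketch of the common recipe: fit a density model on in-distribution data only, then flag test points whose log-likelihood falls below a threshold calibrated on held-out in-distribution samples. It uses a kernel density estimator as a stand-in for a deep generative model such as a VAE or diffusion model; the helper name `is_ood`, the synthetic Gaussian data, and the 5th-percentile threshold are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Fit a density model on in-distribution training data only.
# (A kernel density estimator stands in here for a deep
# generative model such as a VAE or diffusion model.)
rng = np.random.default_rng(0)
train_x = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))  # in-distribution
density = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(train_x)

# Calibrate a threshold on held-out in-distribution data:
# flag anything whose log-likelihood falls below the 5th percentile.
val_x = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
threshold = np.percentile(density.score_samples(val_x), 5)

def is_ood(x: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking samples as out-of-distribution."""
    return density.score_samples(x) < threshold

# In-distribution queries should mostly pass; shifted ones should not.
in_dist = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
shifted = rng.normal(loc=6.0, scale=1.0, size=(200, 2))
print("in-dist flagged:", is_ood(in_dist).mean())   # should be near 0.05
print("shifted flagged:", is_ood(shifted).mean())   # should be near 1.0
```

Much of the research cited above addresses the known failure modes of this basic recipe, for example, that raw likelihoods from deep generative models can assign higher scores to some OOD inputs than to in-distribution data, which motivates corrected scores, contrastive objectives, and invariance-based alternatives.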