Class Out of Distribution

Class out-of-distribution (OOD) research aims to make machine learning models robust to inputs that differ significantly from their training distribution, in particular inputs belonging to classes never seen during training. Current work emphasizes OOD detection methods and improved generalization across data types (tabular, image, text) and tasks (classification, regression, reinforcement learning), often drawing on uncertainty quantification, self-supervised learning, and data augmentation. Reliable handling of unseen data is essential for deploying trustworthy AI systems in real-world settings such as healthcare, robotics, and cybersecurity.
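
As a concrete illustration of one common detection technique referenced above, the sketch below implements the maximum softmax probability (MSP) baseline for OOD detection (Hendrycks & Gimpel, 2017). It assumes an already-trained classifier that returns logits; the names `model`, `inputs`, and `threshold` are illustrative placeholders rather than the API of any paper listed here.

```python
# Minimal sketch of a maximum-softmax-probability (MSP) OOD detector.
# Assumes `model` is a trained classifier returning logits of shape
# (batch, num_classes); all names here are illustrative.
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_scores(model, inputs):
    """Return one OOD score per input: a low max softmax probability
    (high score) suggests the input lies outside the training classes."""
    logits = model(inputs)                 # (batch, num_classes)
    probs = F.softmax(logits, dim=-1)
    return 1.0 - probs.max(dim=-1).values  # higher score = more OOD-like

def flag_ood(model, inputs, threshold=0.5):
    """Flag inputs whose score exceeds a threshold, which in practice
    would be tuned on held-out in-distribution validation data."""
    return msp_scores(model, inputs) > threshold
```

More elaborate methods (e.g., energy-based scores or uncertainty estimates from ensembles) follow the same pattern: compute a scalar confidence-like score per input and threshold it to separate in-distribution from OOD samples.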

Papers