Distribution Shift
Distribution shift, the discrepancy between training and deployment data distributions, is a critical challenge in machine learning that undermines model generalization and reliability. Current research focuses on methods to detect, adapt to, and mitigate various shift types (e.g., covariate, concept, label, and performative), employing techniques such as data augmentation, model retraining with regularization, and adaptive normalization. These advances are crucial for improving the robustness and trustworthiness of machine learning models across diverse real-world applications, particularly in safety-critical domains like healthcare and autonomous driving, where unexpected performance degradation can have significant consequences.
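As an illustration of the detection side mentioned above, a common lightweight check for covariate shift on a single feature is a two-sample Kolmogorov–Smirnov statistic comparing the training distribution to incoming deployment data. The sketch below is a minimal stdlib-only implementation for illustration; the function name `ks_statistic`, the sample sizes, and the alert threshold are all hypothetical choices, not taken from any of the listed papers.

```python
import random


def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: the maximum absolute gap between
    the empirical CDFs of the two samples (a value in [0, 1])."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    # Walk both sorted samples in merge order, tracking the ECDF gap.
    while i < na and j < nb:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d


if __name__ == "__main__":
    rng = random.Random(0)
    train = [rng.gauss(0.0, 1.0) for _ in range(500)]   # "training" feature
    same = [rng.gauss(0.0, 1.0) for _ in range(500)]    # no shift
    shifted = [rng.gauss(1.0, 1.0) for _ in range(500)] # mean-shifted

    # The shifted sample should produce a clearly larger statistic;
    # a deployment monitor might alert above some tuned threshold.
    print("no shift:", ks_statistic(train, same))
    print("shifted: ", ks_statistic(train, shifted))
```

In practice one would convert the statistic to a p-value (e.g., via `scipy.stats.ks_2samp`) and run the check per feature; the point here is only that covariate-shift detection can reduce to comparing empirical distributions.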
Papers
Estimating Uncertainty For Vehicle Motion Prediction on Yandex Shifts Dataset
Alexey Pustynnikov, Dmitry Eremeev
Know Thy Strengths: Comprehensive Dialogue State Tracking Diagnostics
Hyundong Cho, Chinnadhurai Sankar, Christopher Lin, Kaushik Ram Sadagopan, Shahin Shayandeh, Asli Celikyilmaz, Jonathan May, Ahmad Beirami
A benchmark with decomposed distribution shifts for 360 monocular depth estimation
Georgios Albanis, Nikolaos Zioulis, Petros Drakoulis, Federico Alvarez, Dimitrios Zarpalas, Petros Daras
Wiki to Automotive: Understanding the Distribution Shift and its impact on Named Entity Recognition
Anmol Nayak, Hari Prasad Timmapathini