Distribution Shift
Distribution shift, the discrepancy between the data distributions seen at training time and at deployment time, is a central challenge in machine learning: it undermines model generalization and reliability. Current research focuses on detecting, adapting to, and mitigating various types of shift (e.g., covariate, concept, label, and performative shift), using techniques such as data augmentation, regularized retraining, and adaptive normalization. These advances are crucial for improving the robustness and trustworthiness of models in real-world applications, particularly in safety-critical domains such as healthcare and autonomous driving, where unexpected performance degradation can have serious consequences.
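As a concrete illustration of shift detection, one simple approach is to compare the empirical distribution of a feature at training time against the distribution observed at deployment time with a two-sample statistic. The sketch below (an illustrative example, not taken from any of the papers listed) computes the Kolmogorov-Smirnov statistic, the maximum gap between the two empirical CDFs, on synthetic data; the `THRESHOLD` cutoff is an arbitrary illustrative value, not a calibrated critical value.

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum absolute gap between the two empirical CDFs."""
    a = sorted(sample_a)
    b = sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of sample values <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a + b)))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]      # training-time feature
shifted = [random.gauss(0.5, 1.0) for _ in range(1000)]    # covariate shift: mean moved
same = [random.gauss(0.0, 1.0) for _ in range(1000)]       # no shift

THRESHOLD = 0.1  # illustrative cutoff only
print(f"shifted vs. train: {ks_statistic(train, shifted):.3f}")
print(f"same    vs. train: {ks_statistic(train, same):.3f}")
```

A large statistic for the shifted sample (and a small one for the unshifted sample) would trigger the kinds of interventions mentioned above, such as retraining or adaptive normalization. In practice one would use a calibrated test (e.g., with proper critical values) rather than a fixed cutoff.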
Papers
Considerations for Distribution Shift Robustness of Diagnostic Models in Healthcare
Arno Blaas, Adam Goliński, Andrew Miller, Luca Zappella, Jörn-Henrik Jacobsen, Christina Heinze-Deml
A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation
Kexin Zhang, Shuhan Liu, Song Wang, Weili Shi, Chen Chen, Pan Li, Sheng Li, Jundong Li, Kaize Ding
Crafting Distribution Shifts for Validation and Training in Single Source Domain Generalization
Nikos Efthymiadis, Giorgos Tolias, Ondřej Chum
Evolving Multi-Scale Normalization for Time Series Forecasting under Distribution Shifts
Dalin Qin, Yehui Li, Weiqi Chen, Zhaoyang Zhu, Qingsong Wen, Liang Sun, Pierre Pinson, Yi Wang