Distribution Shift
Distribution shift, the discrepancy between training and deployment data distributions, is a critical challenge in machine learning, hindering model generalization and reliability. Current research focuses on developing methods to detect, adapt to, and mitigate the impact of various shift types (e.g., covariate, concept, label, and performative shifts), employing techniques like data augmentation, model retraining with regularization, and adaptive normalization. These advancements are crucial for improving the robustness and trustworthiness of machine learning models across diverse real-world applications, particularly in safety-critical domains like healthcare and autonomous driving, where unexpected performance degradation can have significant consequences.
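As a concrete illustration of the detection techniques mentioned above, the sketch below shows a classifier two-sample test for covariate shift: a domain classifier is trained to distinguish training inputs from deployment inputs, and an accuracy (here, ROC AUC) well above chance signals that the input distribution has shifted. The synthetic Gaussian data, feature dimensionality, and use of scikit-learn are illustrative assumptions only and are not taken from any of the papers listed below.

```python
# Minimal sketch (assumptions: synthetic data, scikit-learn available) of a
# classifier two-sample test for detecting covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: deployment features are shifted relative to training.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 5))   # training distribution
X_deploy = rng.normal(loc=0.7, scale=1.2, size=(500, 5))  # shifted deployment distribution

# Label each sample by its domain (0 = train, 1 = deploy) and fit a domain classifier.
X = np.vstack([X_train, X_deploy])
y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_deploy))])

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()

# AUC near 0.5 means the two domains are indistinguishable (no detectable
# covariate shift); AUC well above 0.5 indicates a shift that may degrade
# the deployed model and warrants adaptation or retraining.
print(f"domain-classifier AUC: {auc:.3f}")
```

The domain-classifier approach is only one option; statistical two-sample tests on input features or on model confidence scores are common alternatives when labeled deployment data is unavailable.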
Papers
Validity Learning on Failures: Mitigating the Distribution Shift in Autonomous Vehicle Planning
Fazel Arasteh, Mohammed Elmahgiubi, Behzad Khamidehi, Hamidreza Mirkhani, Weize Zhang, Cao Tongtong, Kasra Rezaee
Adapting Conformal Prediction to Distribution Shifts Without Labels
Kevin Kasa, Zhiyu Zhang, Heng Yang, Graham W. Taylor
Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph
Weihuang Zheng, Jiashuo Liu, Jiaxing Li, Jiayun Wu, Peng Cui, Youyong Kong