Distribution Shift
Distribution shift, the discrepancy between training and deployment data distributions, is a critical challenge in machine learning that undermines model generalization and reliability. Current research focuses on detecting, adapting to, and mitigating various types of shift (e.g., covariate, concept, label, and performative shift), using techniques such as data augmentation, model retraining with regularization, and adaptive normalization. These advances are crucial for improving the robustness and trustworthiness of machine learning models across diverse real-world applications, particularly in safety-critical domains like healthcare and autonomous driving, where unexpected performance degradation can have significant consequences.
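As a minimal sketch of one of the detection approaches mentioned above, covariate shift in a single feature can be flagged with a two-sample test comparing training and deployment samples. The synthetic data, sample sizes, and significance threshold below are illustrative assumptions, not drawn from any of the listed papers; the example uses SciPy's Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Simulated scalar feature: deployment data has a shifted mean (covariate shift).
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
deploy_feature = rng.normal(loc=0.5, scale=1.0, size=5000)

# Two-sample Kolmogorov-Smirnov test: the null hypothesis is that both
# samples come from the same distribution.
statistic, p_value = ks_2samp(train_feature, deploy_feature)

# Illustrative threshold; in practice it should be corrected for the
# number of features tested.
shift_detected = p_value < 0.01
```

In a real pipeline this check would run per feature (or on learned embeddings), and a detected shift would trigger retraining or adaptation rather than silent deployment.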
Papers
Edit at your own risk: evaluating the robustness of edited models to distribution shifts
Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, Henry Kvinge
Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation
Gaurav Patel, Konda Reddy Mopuri, Qiang Qiu
Statistical Learning under Heterogeneous Distribution Shift
Max Simchowitz, Anurag Ajay, Pulkit Agrawal, Akshay Krishnamurthy
Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts
Gleb Bazhenov, Denis Kuznedelev, Andrey Malinin, Artem Babenko, Liudmila Prokhorenkova