Distribution Shift
Distribution shift, the discrepancy between the data distributions seen during training and deployment, is a central challenge in machine learning because it undermines model generalization and reliability. Current research focuses on detecting, adapting to, and mitigating various types of shift (e.g., covariate, concept, label, and performative shift), using techniques such as data augmentation, retraining with regularization, and adaptive normalization. These advances are crucial for the robustness and trustworthiness of models across diverse real-world applications, particularly in safety-critical domains such as healthcare and autonomous driving, where unexpected performance degradation can have serious consequences.
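As a concrete illustration of shift detection, the sketch below trains a domain classifier to distinguish training inputs from deployment inputs, a standard classifier two-sample test for covariate shift. This is a generic, minimal example rather than the method of any paper listed below; the function name covariate_shift_score and the synthetic Gaussian data are assumptions made for illustration.

# Minimal sketch of covariate-shift detection via a domain classifier
# (classifier two-sample test), assuming scikit-learn is available.
# Illustrative only; not drawn from any paper in the list below.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def covariate_shift_score(X_train, X_deploy):
    """Cross-validated AUC of a classifier separating the two samples.

    AUC near 0.5 suggests the input distributions match; AUC well
    above 0.5 indicates covariate shift.
    """
    X = np.vstack([X_train, X_deploy])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_deploy))])
    clf = LogisticRegression(max_iter=1000)
    aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    return aucs.mean()

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 10))  # training distribution
X_tgt = rng.normal(0.5, 1.0, size=(500, 10))  # mean-shifted deployment inputs
print(f"domain-classifier AUC: {covariate_shift_score(X_src, X_tgt):.3f}")

The same recipe applies to learned representations rather than raw inputs, which is often more sensitive to shifts that matter for a downstream model.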
Papers
Double Descent and Overfitting under Noisy Inputs and Distribution Shift for Linear Denoisers
Chinmaya Kausik, Kashvi Srivastava, Rishi Sonthalia
Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup
Damien Teney, Jindong Wang, Ehsan Abbasnejad
A Closer Look at In-Context Learning under Distribution Shifts
Kartik Ahuja, David Lopez-Paz
Rectifying Group Irregularities in Explanations for Distribution Shift
Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik
Characterizing Out-of-Distribution Error via Optimal Transport
Yuzhe Lu, Yilong Qin, Runtian Zhai, Andrew Shen, Ketong Chen, Zhenlin Wang, Soheil Kolouri, Simon Stepputtis, Joseph Campbell, Katia Sycara