Distribution Shift
Distribution shift, the discrepancy between the data distribution seen at training time and the one encountered at deployment, is a central challenge in machine learning because it undermines model generalization and reliability. Current research focuses on detecting, adapting to, and mitigating different types of shift (e.g., covariate, concept, label, and performative shift), using techniques such as data augmentation, regularized retraining, and adaptive normalization. These advances are crucial for the robustness and trustworthiness of machine learning models across diverse real-world applications, particularly in safety-critical domains like healthcare and autonomous driving, where unexpected performance degradation can have serious consequences.
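To make two of the techniques mentioned above concrete, the sketches below illustrate (i) a simple covariate-shift check and (ii) test-time re-estimation of normalization statistics. Both are minimal, generic illustrations rather than any listed paper's method; the function names, the significance threshold, and the data-loader conventions are assumptions introduced here for the example.

The first sketch runs a per-feature two-sample Kolmogorov–Smirnov test between training and test data. It only compares marginal distributions, so it can miss shifts in feature interactions:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_covariate_shift(X_train: np.ndarray, X_test: np.ndarray,
                           alpha: float = 0.01):
    """Flag features whose marginal distribution differs across splits.

    Runs a two-sample Kolmogorov-Smirnov test per feature and returns
    the indices (with test statistics and p-values) where p < alpha.
    A coarse check: passing it does not rule out covariate shift.
    """
    shifted = []
    for j in range(X_train.shape[1]):
        stat, p_value = ks_2samp(X_train[:, j], X_test[:, j])
        if p_value < alpha:
            shifted.append((j, float(stat), float(p_value)))
    return shifted
```

As one example of adaptive normalization, the second sketch re-estimates BatchNorm running statistics on unlabeled test batches so that normalization matches the shifted inputs, while leaving all learned weights untouched (the idea behind test-time batch-norm adaptation, again sketched generically):

```python
import torch
import torch.nn as nn

@torch.no_grad()  # no gradients: only BN running statistics change
def adapt_batchnorm_stats(model: nn.Module, test_loader,
                          device: str = "cpu") -> nn.Module:
    """Re-estimate BatchNorm running mean/variance on unlabeled test data."""
    model.to(device).eval()
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for m in model.modules():
        if isinstance(m, bn_types):
            m.reset_running_stats()  # forget training-set statistics
            m.train()                # train mode so forward passes refresh them
    for batch in test_loader:
        inputs = batch[0] if isinstance(batch, (tuple, list)) else batch
        model(inputs.to(device))     # forward pass only; weights stay frozen
    model.eval()
    return model
```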
Papers
Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
Hyuhng Joon Kim, Hyunsoo Cho, Sang-Woo Lee, Junyeob Kim, Choonghyun Park, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
Calibration of Time-Series Forecasting: Detecting and Adapting Context-Driven Distribution Shift
Mouxiang Chen, Lefei Shen, Han Fu, Zhuo Li, Jianling Sun, Chenghao Liu
Test-Time Training for Speech
Sri Harsha Dumpala, Chandramouli Sastry, Sageev Oore
A Configurable Library for Generating and Manipulating Maze Datasets
Michael Igorevich Ivanitskiy, Rusheb Shah, Alex F. Spies, Tilman Räuker, Dan Valentine, Can Rager, Lucia Quirke, Chris Mathwin, Guillaume Corlouer, Cecilia Diniz Behn, Samy Wu Fung