Domain Shift
Domain shift, the discrepancy between the data distributions seen at training time and at deployment, significantly degrades machine learning model performance. Current research focuses on robust algorithms and model architectures, such as U-Nets, Swin Transformers, and diffusion models, that mitigate the problem through techniques like distribution alignment, adversarial training, knowledge distillation, and test-time adaptation. These efforts are crucial for the reliability and generalizability of models across diverse real-world applications, particularly medical imaging, autonomous driving, and natural language processing, where data heterogeneity is common. The ultimate goal is models that generalize effectively to unseen data, reducing the need for extensive retraining and increasing the practical impact of AI systems.
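To make one of these techniques concrete, the following is a minimal sketch, assuming PyTorch, of distribution alignment: a shared encoder is trained on the labeled source-domain task while a maximum mean discrepancy (MMD) penalty pulls source and target feature distributions together. The encoder and classifier architecture, the penalty weight lam, and the kernel bandwidth sigma are hypothetical placeholders, not taken from any of the papers listed below.

    import torch
    import torch.nn as nn

    def mmd2(x, y, sigma=1.0):
        # Biased estimate of squared maximum mean discrepancy (MMD)
        # between two feature batches, using a Gaussian (RBF) kernel.
        k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    # Hypothetical encoder/classifier for 32-dim inputs and a binary task.
    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
    classifier = nn.Linear(16, 2)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
    )
    lam = 0.5  # alignment weight; illustrative value

    def train_step(x_src, y_src, x_tgt):
        # The labeled source batch drives the task loss; the unlabeled
        # target batch contributes only to the alignment term.
        f_src, f_tgt = encoder(x_src), encoder(x_tgt)
        task_loss = nn.functional.cross_entropy(classifier(f_src), y_src)
        align_loss = mmd2(f_src, f_tgt)  # pull the two domains together
        loss = task_loss + lam * align_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

Variants of this idea replace the MMD term with an adversarial domain discriminator (as in domain-adversarial training) or apply the alignment at test time to adapt a trained model to incoming target data.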
Papers
Building blocks for complex tasks: Robust generative event extraction for radiology reports under domain shifts
Sitong Zhou, Meliha Yetisgen, Mari Ostendorf
Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical Data
Daniel Kreuter, Samuel Tull, Julian Gilbey, Jacobus Preller, BloodCounts! Consortium, John A. D. Aston, James H. F. Rudd, Suthesh Sivapalaratnam, Carola-Bibiane Schönlieb, Nicholas Gleadall, Michael Roberts
On the Robustness of Arabic Speech Dialect Identification
Peter Sullivan, AbdelRahim Elmadany, Muhammad Abdul-Mageed
Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction
Robert A. Marsden, Mario Döbler, Bin Yang
FACT: Federated Adversarial Cross Training
Stefan Schrod, Jonas Lippl, Andreas Schäfer, Michael Altenbuchinger
Measuring the Robustness of NLP Models to Domain Shifts
Nitay Calderon, Naveh Porat, Eyal Ben-David, Alexander Chapanin, Zorik Gekhman, Nadav Oved, Vitaly Shalumov, Roi Reichart
Domain knowledge-informed Synthetic fault sample generation with Health Data Map for cross-domain Planetary Gearbox Fault Diagnosis
Jong Moon Ha, Olga Fink
Deep into The Domain Shift: Transfer Learning through Dependence Regularization
Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, Zhixiang Huang