Domain Shift
Domain shift, the discrepancy between the training data distribution and the distribution encountered at deployment, significantly degrades machine learning model performance. Current research addresses it along two fronts: robust model architectures such as U-Nets, Swin Transformers, and diffusion models, and adaptation techniques such as distribution alignment, adversarial training, and knowledge distillation. These efforts are crucial for improving the reliability and generalizability of models across diverse real-world applications, particularly in medical imaging, autonomous driving, and natural language processing, where data heterogeneity is common. The ultimate goal is models that generalize effectively to unseen data, reducing the need for extensive retraining and improving the practical impact of AI systems.
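One common building block behind the distribution-alignment techniques mentioned above is a measure of the gap between source and target feature distributions, such as the Maximum Mean Discrepancy (MMD). The following is a minimal illustrative sketch, not drawn from any of the papers listed below: it estimates a squared MMD with an RBF kernel (the `gamma` bandwidth and the synthetic Gaussian data are assumptions for the demo) and shows that a mean-shifted "target domain" yields a larger gap than an in-distribution sample.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    # (Biased) squared Maximum Mean Discrepancy estimate:
    # larger values indicate a larger distribution gap between domains.
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 2))    # "training" domain
shifted = rng.normal(1.5, 1.0, size=(200, 2))   # simulated domain shift
same = rng.normal(0.0, 1.0, size=(200, 2))      # fresh in-distribution sample

print(mmd2(source, shifted), mmd2(source, same))
```

Alignment-based adaptation methods typically add a term like this MMD (or an adversarial domain discriminator playing the same role) to the training loss, so the network is pushed to produce features for which this gap is small.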
Papers
SFHarmony: Source Free Domain Adaptation for Distributed Neuroimaging Analysis
Nicola K Dinsdale, Mark Jenkinson, Ana IL Namburete
Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning
Wonguk Cho, Jinha Park, Taesup Kim
MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation
Ziyuan Zhao, Kaixin Xu, Huai Zhe Yeo, Xulei Yang, Cuntai Guan