Domain Shift
Domain shift, the discrepancy between training and deployment data distributions, significantly degrades the performance of machine learning models. Current research focuses on robust algorithms and architectures, such as U-Nets, Swin Transformers, and diffusion models, that mitigate this issue through techniques like distribution alignment, adversarial training, and knowledge distillation; a small illustrative sketch of the first technique follows below. These efforts are crucial for improving the reliability and generalizability of models across diverse real-world applications, particularly medical imaging, autonomous driving, and natural language processing, where data heterogeneity is common. The ultimate goal is models that generalize effectively to unseen data, reducing the need for extensive retraining and increasing the practical impact of AI systems.
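To make the distribution-alignment idea concrete, here is a minimal sketch of the classic Deep CORAL loss (Sun & Saenko, 2016), which penalizes the gap between the second-order statistics (feature covariances) of a source and a target batch. This is one representative alignment technique, not the method of any paper listed below; the `coral_loss` helper name and the PyTorch setting are assumptions for illustration.

```python
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Deep CORAL loss: align the covariances of source and target
    feature batches, each of shape (batch_size, feature_dim)."""
    d = source_feats.size(1)

    def covariance(x: torch.Tensor) -> torch.Tensor:
        # Center the batch, then compute the sample covariance matrix.
        x = x - x.mean(dim=0, keepdim=True)
        return (x.t() @ x) / (x.size(0) - 1)

    c_s = covariance(source_feats)
    c_t = covariance(target_feats)
    # Squared Frobenius norm of the covariance gap, scaled by 1/(4 d^2)
    # as in the original Deep CORAL formulation.
    return ((c_s - c_t) ** 2).sum() / (4.0 * d * d)

# Hypothetical usage inside a training step: the alignment penalty is
# added to the supervised task loss with a small weight, pulling the two
# feature distributions together without requiring target-domain labels.
# loss = task_loss + coral_weight * coral_loss(f_src, f_tgt)
```

The appeal of covariance-based alignment is that it needs no adversarial discriminator or labeled target data; other techniques named above (adversarial training, knowledge distillation) pursue the same goal with different machinery.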
Papers
Look, Learn and Leverage (L³): Mitigating Visual-Domain Shift and Discovering Intrinsic Relations via Symbolic Alignment
Hanchen Xie, Jiageng Zhu, Mahyar Khayatkhoei, Jiazhi Li, Wael AbdAlmageed
BTMuda: A Bi-level Multi-source unsupervised domain adaptation framework for breast cancer diagnosis
Yuxiang Yang, Xinyi Zeng, Pinxian Zeng, Binyu Yan, Xi Wu, Jiliu Zhou, Yan Wang
Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams
Ziqiang Wang, Zhixiang Chi, Yanan Wu, Li Gu, Zhi Liu, Konstantinos Plataniotis, Yang Wang
The Devil is in the Statistics: Mitigating and Exploiting Statistics Difference for Generalizable Semi-supervised Medical Image Segmentation
Muyang Qiu, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao