Domain Shift
Domain shift, the discrepancy between training and deployment data distributions, significantly degrades machine learning model performance. Current research focuses on developing robust algorithms and model architectures, such as U-Nets, Swin Transformers, and diffusion models, to mitigate this issue through techniques like distribution alignment, adversarial training, and knowledge distillation. These efforts are crucial for improving the reliability and generalizability of machine learning models across diverse real-world applications, particularly in medical imaging, autonomous driving, and natural language processing, where data heterogeneity is common. The ultimate goal is to create models that generalize effectively to unseen data, reducing the need for extensive retraining and improving the practical impact of AI systems.
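Distribution alignment, one of the techniques mentioned above, is often implemented by adding a penalty that measures the gap between source and target feature distributions during training. The sketch below shows one common instantiation, an RBF-kernel maximum mean discrepancy (MMD) loss in PyTorch; the function name, batch shapes, and kernel bandwidth are illustrative assumptions and are not drawn from the papers listed here.

```python
# Illustrative sketch (assumed details): an RBF-kernel MMD penalty that can be
# added to a task loss to pull source and target feature distributions together.
import torch

def mmd_rbf(source_feats: torch.Tensor, target_feats: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum mean discrepancy between two feature batches of shape (n_s, d) and (n_t, d)."""
    def rbf(x, y):
        # Pairwise squared Euclidean distances, then a Gaussian kernel.
        dists = torch.cdist(x, y) ** 2
        return torch.exp(-dists / (2 * sigma ** 2))

    k_ss = rbf(source_feats, source_feats).mean()
    k_tt = rbf(target_feats, target_feats).mean()
    k_st = rbf(source_feats, target_feats).mean()
    return k_ss + k_tt - 2 * k_st

# Example usage: combine with a supervised loss on the labeled source domain.
source = torch.randn(32, 128)   # features extracted from source-domain inputs
target = torch.randn(32, 128)   # features extracted from unlabeled target-domain inputs
alignment_loss = mmd_rbf(source, target)
```

A small MMD value indicates the two feature distributions are close in the kernel's reproducing space; in practice this penalty is weighted and added to the task loss so the encoder learns domain-invariant features.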
Papers
Effective Restoration of Source Knowledge in Continual Test Time Adaptation
Fahim Faisal Niloy, Sk Miraj Ahmed, Dripta S. Raychaudhuri, Samet Oymak, Amit K. Roy-Chowdhury
Robust and Communication-Efficient Federated Domain Adaptation via Random Features
Zhanbo Feng, Yuanjie Wang, Jie Li, Fan Yang, Jiong Lou, Tiebin Mi, Robert C. Qiu, Zhenyu Liao