Domain Adaptation
Domain adaptation addresses the challenge of applying machine learning models trained on one dataset (the source domain) to a dataset drawn from a different distribution (the target domain). Current research focuses on techniques such as adversarial training, knowledge distillation, and optimal transport to bridge this domain gap, often employing transformer-based models, generative adversarial networks (GANs), and various meta-learning approaches. This field is crucial for improving the robustness and generalizability of machine learning models across diverse real-world applications, particularly in areas with limited labeled data such as medical imaging, natural language processing for low-resource languages, and personalized recommendation systems. The development of standardized evaluation frameworks is also a growing area of focus to ensure fair comparison and reproducibility of results.
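To make the adversarial-training idea concrete, below is a minimal sketch of a DANN-style setup in PyTorch: a shared feature extractor feeds both a label classifier (trained on labeled source data) and a domain discriminator whose gradients are reversed, pushing features toward domain invariance. The layer sizes, class names, and training step are illustrative assumptions, not the method of any specific paper listed here.

```python
# Minimal sketch of adversarial domain adaptation (gradient-reversal / DANN style).
# Assumes PyTorch; dimensions and hyperparameters are placeholders.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DANN(nn.Module):
    """Shared feature extractor with a task head and a source-vs-target head."""
    def __init__(self, in_dim=64, hidden=32, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)   # predicts task labels
        self.discriminator = nn.Linear(hidden, 2)          # predicts domain (0=source, 1=target)

    def forward(self, x, lam=1.0):
        h = self.features(x)
        y_logits = self.classifier(h)
        # Reversed gradients make features harder for the discriminator to separate.
        d_logits = self.discriminator(GradientReversal.apply(h, lam))
        return y_logits, d_logits


# One hypothetical training step: labeled source batch + unlabeled target batch.
model = DANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

xs, ys = torch.randn(8, 64), torch.randint(0, 10, (8,))   # source features + labels
xt = torch.randn(8, 64)                                    # target features (no labels)

ys_logits, ds_logits = model(xs)
_, dt_logits = model(xt)
domain_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
loss = ce(ys_logits, ys) + ce(torch.cat([ds_logits, dt_logits]), domain_labels)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the scaling factor `lam` is often ramped up over training so the domain-confusion signal does not dominate early feature learning.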
Papers - Page 39
Simple and Scalable Nearest Neighbor Machine Translation
Yuhan Dai, Zhirui Zhang, Qiuzhi Liu, Qu Cui, Weihua Li, Yichao Du, Tong Xu
Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach
Minyoung Kim, Da Li, Timothy Hospedales
Unsupervised Domain Adaptation via Distilled Discriminative Clustering
Hui Tang, Yaowei Wang, Kui Jia
A Comprehensive Survey on Source-free Domain Adaptation
Zhiqi Yu, Jingjing Li, Zhekai Du, Lei Zhu, Heng Tao Shen
Domain Adaptation for Time Series Under Feature and Label Shifts
Huan He, Owen Queen, Teddy Koker, Consuelo Cuevas, Theodoros Tsiligkaridis, Marinka Zitnik
RLSbench: Domain Adaptation Under Relaxed Label Shift
Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, Zachary C. Lipton
Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
Zihao Xu, Guang-Yuan Hao, Hao He, Hao Wang