Domain Adaptation
Domain adaptation addresses the challenge of applying machine learning models trained on one dataset (the source domain) to data drawn from a different distribution (the target domain). Current research focuses on techniques such as adversarial training, knowledge distillation, and optimal transport to bridge this domain gap, often built on transformer-based models, generative adversarial networks (GANs), and meta-learning approaches. The field is crucial for improving the robustness and generalizability of machine learning models in real-world applications, particularly where labeled data is scarce, such as medical imaging, natural language processing for low-resource languages, and personalized recommendation systems. Developing standardized evaluation frameworks is also a growing focus, aimed at ensuring fair comparison and reproducibility of results.
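To make the adversarial-training idea concrete, below is a minimal sketch of DANN-style unsupervised domain adaptation, assuming PyTorch. The network shapes, batch sizes, and toy tensors are hypothetical placeholders, not taken from any of the papers listed here; the point is only to illustrate how a gradient reversal layer lets a domain discriminator pull source and target features toward a shared representation while a classifier is trained on labeled source data.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Hypothetical toy architecture: 64-d inputs, 32-d features, 10 source classes.
feature_extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
label_classifier = nn.Linear(32, 10)      # trained on labeled source data only
domain_discriminator = nn.Linear(32, 2)   # predicts source (0) vs. target (1)

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy batches standing in for real data loaders.
x_src, y_src = torch.randn(16, 64), torch.randint(0, 10, (16,))
x_tgt = torch.randn(16, 64)               # target batch is unlabeled

for step in range(100):
    optimizer.zero_grad()
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # Task loss: standard supervised loss on the source domain.
    task_loss = criterion(label_classifier(f_src), y_src)

    # Domain loss: the discriminator separates source from target features,
    # while the reversed gradient pushes the extractor to make them indistinguishable.
    feats = torch.cat([f_src, f_tgt])
    domains = torch.cat([torch.zeros(16, dtype=torch.long),
                         torch.ones(16, dtype=torch.long)])
    domain_loss = criterion(domain_discriminator(GradReverse.apply(feats, 1.0)), domains)

    (task_loss + domain_loss).backward()
    optimizer.step()

In practice the reversal strength (the lambd argument) is usually annealed over training, and the same alignment objective can be swapped for an optimal-transport or distillation-based loss, which is the design space several of the papers below explore.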
Papers
Domain adaptation using optimal transport for invariant learning using histopathology datasets
Kianoush Falahkheirkhah, Alex Lu, David Alvarez-Melis, Grace Huynh
Quantifying the LiDAR Sim-to-Real Domain Shift: A Detailed Investigation Using Object Detectors and Analyzing Point Clouds at Target-Level
Sebastian Huch, Luca Scalerandi, Esteban Rivera, Markus Lienkamp
Exploiting Language Relatedness in Machine Translation Through Domain Adaptation Techniques
Amit Kumar, Rupjyoti Baruah, Ajay Pratap, Mayank Swarnkar, Anil Kumar Singh
Cluster-Guided Semi-Supervised Domain Adaptation for Imbalanced Medical Image Classification
Shota Harada, Ryoma Bise, Kengo Araki, Akihiko Yoshizawa, Kazuhiro Terada, Mariyo Kurata, Naoki Nakajima, Hiroyuki Abe, Tetsuo Ushiku, Seiichi Uchida
UZH_CLyp at SemEval-2023 Task 9: Head-First Fine-Tuning and ChatGPT Data Generation for Cross-Lingual Learning in Tweet Intimacy Prediction
Andrianos Michail, Stefanos Konstantinou, Simon Clematide
Domain Adaptation of Reinforcement Learning Agents based on Network Service Proximity
Kaushik Dey, Satheesh K. Perepu, Pallab Dasgupta, Abir Das
Target Domain Data induces Negative Transfer in Mixed Domain Training with Disjoint Classes
Eryk Banatt, Vickram Rajendran, Liam Packer
UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers
Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, Christopher Potts
Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation
Rehan Ahmad, Md Asif Jalal, Muhammad Umar Farooq, Anna Ollerenshaw, Thomas Hain
Domain-adapted large language models for classifying nuclear medicine reports
Zachary Huemann, Changhee Lee, Junjie Hu, Steve Y. Cho, Tyler Bradshaw
Simple and Scalable Nearest Neighbor Machine Translation
Yuhan Dai, Zhirui Zhang, Qiuzhi Liu, Qu Cui, Weihua Li, Yichao Du, Tong Xu
Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach
Minyoung Kim, Da Li, Timothy Hospedales
Unsupervised Domain Adaptation via Distilled Discriminative Clustering
Hui Tang, Yaowei Wang, Kui Jia
A Comprehensive Survey on Source-free Domain Adaptation
Zhiqi Yu, Jingjing Li, Zhekai Du, Lei Zhu, Heng Tao Shen