Domain Adaptation
Domain adaptation addresses the challenge of applying machine learning models trained on one dataset (the source domain) to data drawn from a different distribution (the target domain). Current research bridges this domain gap with techniques such as adversarial training, knowledge distillation, and optimal transport, often built on transformer-based models, generative adversarial networks (GANs), and meta-learning approaches. The field is crucial for improving the robustness and generalizability of machine learning models in real-world applications with limited labeled data, such as medical imaging, natural language processing for low-resource languages, and personalized recommendation systems. Developing standardized evaluation frameworks is also a growing focus, to ensure fair comparison and reproducibility of results.
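To make the adversarial-training idea concrete, below is a minimal sketch of DANN-style domain-adversarial training in PyTorch, assuming toy source/target tensors; the names GradReverse, feature_extractor, label_classifier, and domain_classifier are illustrative choices for this sketch, not taken from any of the papers listed.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
label_classifier = nn.Linear(64, 2)    # trained on labeled source data only
domain_classifier = nn.Linear(64, 2)   # tries to tell source from target

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_classifier.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy batches standing in for real source/target data loaders.
x_src, y_src = torch.randn(32, 20), torch.randint(0, 2, (32,))
x_tgt = torch.randn(32, 20)

for step in range(100):
    optimizer.zero_grad()
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # Task loss on the labeled source domain.
    task_loss = criterion(label_classifier(f_src), y_src)

    # Domain loss: the reversed gradient pushes the feature extractor
    # toward representations the domain classifier cannot separate.
    feats = torch.cat([f_src, f_tgt])
    domains = torch.cat([torch.zeros(32, dtype=torch.long),
                         torch.ones(32, dtype=torch.long)])
    domain_logits = domain_classifier(GradReverse.apply(feats, 1.0))
    domain_loss = criterion(domain_logits, domains)

    (task_loss + domain_loss).backward()
    optimizer.step()

The gradient-reversal weight (here fixed at 1.0) is commonly ramped up over training so the label classifier stabilizes before the adversarial signal dominates.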
Papers
Building Manufacturing Deep Learning Models with Minimal and Imbalanced Training Data Using Domain Adaptation and Data Augmentation
Adrian Shuai Li, Elisa Bertino, Rih-Teng Wu, Ting-Yan Wu
Hypothesis Transfer Learning with Surrogate Classification Losses: Generalization Bounds through Algorithmic Stability
Anass Aghbalou, Guillaume Staerman
Deep into The Domain Shift: Transfer Learning through Dependence Regularization
Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, Zhixiang Huang
ELSA: Efficient Label Shift Adaptation through the Lens of Semiparametric Models
Qinglong Tian, Xin Zhang, Jiwei Zhao
Convolutional Monge Mapping Normalization for learning on sleep data
Théo Gnassounou, Rémi Flamary, Alexandre Gramfort
Can We Evaluate Domain Adaptation Models Without Target-Domain Labels?
Jianfei Yang, Hanjie Qian, Yuecong Xu, Kai Wang, Lihua Xie
Coping with low data availability for social media crisis message categorisation
Congcong Wang
NeuroX Library for Neuron Analysis of Deep NLP Models
Fahim Dalvi, Hassan Sajjad, Nadir Durrani
Extremely weakly-supervised blood vessel segmentation with physiologically based synthesis and domain adaptation
Peidi Xu, Olga Sosnovtseva, Charlotte Mehlin Sørensen, Kenny Erleben, Sune Darkner
Modular Domain Adaptation for Conformer-Based Streaming ASR
Qiujia Li, Bo Li, Dongseong Hwang, Tara N. Sainath, Pedro M. Mengibar
Feasibility of Transfer Learning: A Mathematical Framework
Haoyang Cao, Haotian Gu, Xin Guo
TADA: Efficient Task-Agnostic Domain Adaptation for Transformers
Chia-Chien Hung, Lukas Lange, Jannik Strötgen
Imbalance-Agnostic Source-Free Domain Adaptation via Avatar Prototype Alignment
Hongbin Lin, Mingkui Tan, Yifan Zhang, Zhen Qiu, Shuaicheng Niu, Dong Liu, Qing Du, Yanxia Liu
NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification
Iyanuoluwa Shode, David Ifeoluwa Adelani, Jing Peng, Anna Feldman
Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose
Yichen Zhang, Jiehong Lin, Ke Chen, Zelin Xu, Yaowei Wang, Kui Jia