Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) tackles the challenge of training machine learning models on labeled data from one domain (source) and applying them effectively to unlabeled data from a different but related domain (target). Current research focuses on improving the robustness and efficiency of UDA, exploring techniques like adversarial training, self-training, and representation learning using architectures such as convolutional neural networks and vision transformers. These advancements are crucial for various applications, including medical image analysis, remote sensing, and time series classification, where obtaining sufficient labeled data for each domain is often impractical or expensive. The development of standardized evaluation frameworks and the exploration of efficient UDA methods for resource-constrained environments are also significant current trends.
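To make one of these ingredients concrete, the sketch below implements a multi-kernel Maximum Mean Discrepancy (MMD) loss of the kind commonly used to align source and target feature distributions (the discrepancy measure referenced in the fault-diagnosis paper listed below). This is a minimal illustration under stated assumptions, not code from any of the listed papers; the function name `gaussian_mmd`, the bandwidth values, and the way the term is combined with a supervised source loss are all illustrative choices.

```python
# Minimal sketch (assumption, not from the listed papers): a Gaussian multi-kernel
# MMD loss that penalizes the distance between source and target feature batches,
# a common representation-alignment term in UDA training objectives.
import torch


def gaussian_mmd(source_feats: torch.Tensor,
                 target_feats: torch.Tensor,
                 bandwidths=(1.0, 2.0, 4.0)) -> torch.Tensor:
    """Biased multi-kernel MMD^2 estimate between batches of shape (n_s, d) and (n_t, d)."""
    x = torch.cat([source_feats, target_feats], dim=0)
    # Pairwise squared Euclidean distances between all features.
    sq_dists = torch.cdist(x, x, p=2.0) ** 2
    # Sum of Gaussian kernels at several bandwidths for robustness to scale.
    k = sum(torch.exp(-sq_dists / (2.0 * bw ** 2)) for bw in bandwidths)
    n_s = source_feats.size(0)
    k_ss = k[:n_s, :n_s]   # source-source kernel block
    k_tt = k[n_s:, n_s:]   # target-target kernel block
    k_st = k[:n_s, n_s:]   # source-target kernel block
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()


# Typical usage (hypothetical training step): add the alignment term to the
# supervised loss on labeled source data, with unlabeled target features f_t.
#   loss = cross_entropy(classifier(f_s), y_s) + lambda_mmd * gaussian_mmd(f_s, f_t)
```

Adversarial UDA methods replace this explicit discrepancy term with a domain discriminator and a gradient-reversal step, while self-training methods instead generate pseudo-labels on the target data; the overall structure of source loss plus alignment signal is broadly similar.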
Papers
Gradual Domain Adaptation: Theory and Algorithms
Yifei He, Haoxiang Wang, Bo Li, Han Zhao
Weighted Joint Maximum Mean Discrepancy Enabled Multi-Source-Multi-Target Unsupervised Domain Adaptation Fault Diagnosis
Zixuan Wang, Haoran Tang, Haibo Wang, Bo Qin, Mark D. Butala, Weiming Shen, Hongwei Wang
Neural domain alignment for spoken language recognition based on optimal transport
Xugang Lu, Peng Shen, Yu Tsao, Hisashi Kawai