Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) tackles the challenge of training machine learning models on labeled data from one domain (the source) and applying them effectively to unlabeled data from a different but related domain (the target). Current research focuses on improving the robustness and efficiency of UDA through techniques such as adversarial training, self-training, and representation learning, built on architectures ranging from convolutional neural networks to vision transformers. These advances matter for applications such as medical image analysis, remote sensing, and time series classification, where obtaining sufficient labeled data for every domain is often impractical or expensive. Standardized evaluation frameworks and efficient UDA methods for resource-constrained environments are also significant current trends.
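To make the adversarial-training approach mentioned above concrete, below is a minimal PyTorch sketch of domain-adversarial training with a gradient-reversal layer (in the style of DANN). It is an illustrative sketch, not the method of any paper listed here: the network sizes, the random source/target batches, and the names GradReverse and DANN are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient flows back negated, which pushes features toward domain confusion.
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    """Feature extractor + label classifier + domain discriminator behind gradient reversal."""
    def __init__(self, in_dim=128, feat_dim=64, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.domain_disc = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        return self.classifier(f), self.domain_disc(GradReverse.apply(f, lambd))

# One training step: supervised loss on the labeled source batch,
# domain-classification loss on source + target batches (fully unlabeled target).
model = DANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

xs, ys = torch.randn(32, 128), torch.randint(0, 10, (32,))  # labeled source batch (toy data)
xt = torch.randn(32, 128)                                    # unlabeled target batch (toy data)

cls_s, dom_s = model(xs)
_, dom_t = model(xt)
domain_labels = torch.cat([torch.zeros(32, dtype=torch.long),   # 0 = source
                           torch.ones(32, dtype=torch.long)])   # 1 = target
loss = ce(cls_s, ys) + ce(torch.cat([dom_s, dom_t]), domain_labels)

opt.zero_grad()
loss.backward()
opt.step()
```

In practice, the reversal coefficient lambd is usually ramped up over the course of training so that the domain discriminator does not dominate the feature extractor early on.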
Papers
Unsupervised Domain Adaptation for Segmentation with Black-box Source Model
Xiaofeng Liu, Chaehwa Yoo, Fangxu Xing, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Subtype-Aware Dynamic Unsupervised Domain Adaptation
Xiaofeng Liu, Fangxu Xing, Jia You, Jun Lu, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Unsupervised Domain Adaptation Semantic Segmentation of High-Resolution Remote Sensing Imagery with Invariant Domain-Level Prototype Memory
Jingru Zhu, Ya Guo, Geng Sun, Libo Yang, Min Deng, Jie Chen
Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives
Xiaofeng Liu, Chaehwa Yoo, Fangxu Xing, Hyejin Oh, Georges El Fakhri, Je-Won Kang, Jonghye Woo
Three New Validators and a Large-Scale Benchmark Ranking for Unsupervised Domain Adaptation
Kevin Musgrave, Serge Belongie, Ser-Nam Lim