Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) addresses the problem of training machine learning models on labeled data from a source domain so that they perform well on a different but related target domain for which only unlabeled data is available. Current research focuses on improving the robustness and efficiency of UDA through techniques such as adversarial training, self-training, and representation learning, built on architectures such as convolutional neural networks and vision transformers. These advances matter for applications including medical image analysis, remote sensing, and time series classification, where obtaining sufficient labeled data for every domain is often impractical or expensive. The development of standardized evaluation frameworks and of efficient UDA methods for resource-constrained environments are also significant current trends.
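As a concrete illustration of the self-training strategy mentioned above, the sketch below combines a supervised loss on a labeled source batch with a pseudo-label loss on confident predictions for an unlabeled target batch. This is a minimal PyTorch sketch under assumed placeholders (`model`, `optimizer`, `source_batch`, `target_batch`, and the confidence `threshold`); it is not the method of any specific paper listed here.

```python
# Minimal UDA self-training step (illustrative sketch; all names are placeholders).
import torch
import torch.nn.functional as F

def uda_self_training_step(model, optimizer, source_batch, target_batch, threshold=0.9):
    """One step: supervised loss on labeled source data plus a pseudo-label
    loss on confident model predictions for unlabeled target data."""
    x_s, y_s = source_batch   # labeled source inputs and labels
    x_t = target_batch        # unlabeled target inputs

    # Supervised cross-entropy on the source domain.
    loss = F.cross_entropy(model(x_s), y_s)

    # Generate pseudo-labels on the target domain; keep only confident ones.
    with torch.no_grad():
        probs = F.softmax(model(x_t), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= threshold

    # Add the pseudo-label loss for the confident subset, if any.
    if mask.any():
        loss = loss + F.cross_entropy(model(x_t[mask]), pseudo_y[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the confidence threshold and the relative weighting of the two loss terms are tuned per task; adversarial alternatives instead train a domain discriminator on shared features so that source and target representations become indistinguishable.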
Papers
Is Generative Modeling-based Stylization Necessary for Domain Adaptation in Regression Tasks?
Jinman Park, Francois Barnard, Saad Hossain, Sirisha Rambhatla, Paul Fieguth
Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning
Yihong Cao, Hui Zhang, Xiao Lu, Zheng Xiao, Kailun Yang, Yaonan Wang
Attentive Continuous Generative Self-training for Unsupervised Domain Adaptive Medical Image Translation
Xiaofeng Liu, Jerry L. Prince, Fangxu Xing, Jiachen Zhuo, Reese Timothy, Maureen Stone, Georges El Fakhri, Jonghye Woo
Adaptive Face Recognition Using Adversarial Information Network
Mei Wang, Weihong Deng
Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation
Lukas Hoyer, Dengxin Dai, Luc Van Gool
Cluster Entropy: Active Domain Adaptation in Pathological Image Segmentation
Xiaoqing Liu, Kengo Araki, Shota Harada, Akihiko Yoshizawa, Kazuhiro Terada, Mariyo Kurata, Naoki Nakajima, Hiroyuki Abe, Tetsuo Ushiku, Ryoma Bise