Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) tackles the challenge of training machine learning models on labeled data from one domain (the source) and applying them effectively to unlabeled data from a different but related domain (the target). Current research focuses on improving the robustness and efficiency of UDA, exploring techniques such as adversarial training, self-training, and representation learning, typically built on architectures like convolutional neural networks and vision transformers. These advances matter for applications such as medical image analysis, remote sensing, and time series classification, where obtaining sufficient labeled data for every domain is often impractical or expensive. The development of standardized evaluation frameworks and of efficient UDA methods for resource-constrained environments are also significant current trends.
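To make the adversarial-training idea concrete, below is a minimal PyTorch sketch in the style of gradient-reversal domain adaptation (DANN): a shared feature extractor is trained with a supervised loss on labeled source data and a domain-confusion loss on source plus unlabeled target data. All module names, layer sizes, and hyperparameters are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Sketch of adversarial UDA with a gradient reversal layer (DANN-style).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# Illustrative networks: 28x28 grayscale inputs, 10 classes (assumed, not from the papers).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)   # trained on labeled source data only
domain_classifier = nn.Linear(256, 2)   # source-vs-target discriminator

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_classifier.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x_src, y_src, x_tgt, lambd=0.1):
    """One adaptation step: supervised loss on source, domain loss on source + target."""
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)

    # Task loss: only the source domain has labels.
    cls_loss = ce(label_classifier(f_src), y_src)

    # Domain loss: the discriminator separates domains, while the reversed
    # gradient pushes the feature extractor to make them indistinguishable.
    feats = torch.cat([f_src, f_tgt], dim=0)
    domains = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))]).long()
    dom_loss = ce(domain_classifier(GradReverse.apply(feats, lambd)), domains)

    loss = cls_loss + dom_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in batches.
loss = train_step(torch.randn(32, 1, 28, 28),
                  torch.randint(0, 10, (32,)),
                  torch.randn(32, 1, 28, 28))
```

The same training loop could swap the domain loss for a self-training objective (pseudo-labeling confident target predictions); the gradient-reversal variant is shown only because it is compact and self-contained.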
Papers
Noise transfer for unsupervised domain adaptation of retinal OCT images
Valentin Koch, Olle Holmberg, Hannah Spitzer, Johannes Schiefelbein, Ben Asani, Michael Hafner, Fabian J Theis
Memory Consistent Unsupervised Off-the-Shelf Model Adaptation for Source-Relaxed Medical Image Segmentation
Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Deliberated Domain Bridging for Domain Adaptive Semantic Segmentation
Lin Chen, Zhixiang Wei, Xin Jin, Huaian Chen, Miao Zheng, Kai Chen, Yi Jin