Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) tackles the challenge of training machine learning models on labeled data from one domain (source) and applying them effectively to unlabeled data from a different but related domain (target). Current research focuses on improving the robustness and efficiency of UDA, exploring techniques like adversarial training, self-training, and representation learning using architectures such as convolutional neural networks and vision transformers. These advancements are crucial for various applications, including medical image analysis, remote sensing, and time series classification, where obtaining sufficient labeled data for each domain is often impractical or expensive. The development of standardized evaluation frameworks and the exploration of efficient UDA methods for resource-constrained environments are also significant current trends.
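As a concrete illustration of the adversarial-training line of work mentioned above, below is a minimal sketch of a DANN-style gradient-reversal setup in PyTorch. It is not the method of any specific paper listed here; the module sizes, class names, and the `train_step` helper are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Gradient reversal: identity in the forward pass, flips (and scales) gradients in the
# backward pass, so the feature extractor is pushed toward domain-invariant features.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


class DANN(nn.Module):
    """Feature extractor + label classifier + domain discriminator (illustrative sizes)."""
    def __init__(self, in_dim=256, hidden=128, num_classes=10, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)   # trained on labeled source data only
        self.discriminator = nn.Linear(hidden, 2)           # predicts source vs. target domain

    def forward(self, x):
        f = self.features(x)
        class_logits = self.classifier(f)
        domain_logits = self.discriminator(GradReverse.apply(f, self.lambd))
        return class_logits, domain_logits


# One training step: classification loss on the labeled source batch,
# domain-confusion loss on both the source and the unlabeled target batch.
def train_step(model, opt, xs, ys, xt):
    ce = nn.CrossEntropyLoss()
    cls_s, dom_s = model(xs)   # labeled source batch
    _, dom_t = model(xt)       # unlabeled target batch
    loss = (ce(cls_s, ys)
            + ce(dom_s, torch.zeros(len(xs), dtype=torch.long))
            + ce(dom_t, torch.ones(len(xt), dtype=torch.long)))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Self-training variants instead replace the domain discriminator with pseudo-labels produced by the source-trained classifier on confident target predictions; the overall loop (labeled source loss plus an unsupervised target term) has the same shape as the sketch above.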
Papers
Modeling Hierarchical Structural Distance for Unsupervised Domain Adaptation
Yingxue Xu, Guihua Wen, Yang Hu, Pei Yang
Boosting Novel Category Discovery Over Domains with Soft Contrastive Learning and All-in-One Classifier
Zelin Zang, Lei Shang, Senqiao Yang, Fei Wang, Baigui Sun, Xuansong Xie, Stan Z. Li
AdaTriplet-RA: Domain Matching via Adaptive Triplet and Reinforced Attention for Unsupervised Domain Adaptation
Xinyao Shu, Shiyang Yan, Zhenyu Lu, Xinshao Wang, Yuan Xie
ELDA: Using Edges to Have an Edge on Semantic Segmentation Based UDA
Ting-Hsuan Liao, Huang-Ru Liao, Shan-Ya Yang, Jie-En Yao, Li-Yuan Tsao, Hsu-Shen Liu, Bo-Wun Cheng, Chen-Hao Chao, Chia-Che Chang, Yi-Chen Lo, Chun-Yi Lee
Unsupervised Domain Adaptation Based on the Predictive Uncertainty of Models
JoonHo Lee, Gyemin Lee