Domain-Invariant Feature Learning
Domain-invariant feature learning aims to extract features that are robust to variations across domains or datasets, enabling models trained on one domain to generalize effectively to others. Current research focuses on algorithms and model architectures for achieving this invariance, such as adversarial learning, contrastive learning, and autoencoder variants, often combined with techniques like test-time adaptation and multi-task learning. This research is significant because it addresses the critical challenge of data heterogeneity in machine learning, improving the robustness and generalizability of models across diverse applications, including medical image analysis, object detection, and natural language processing. The resulting domain-invariant representations enhance model performance and reduce the need for extensive retraining when data distributions shift.
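One common route to domain invariance mentioned above is adversarial learning, often implemented with a gradient reversal scheme (in the style of DANN): a domain classifier is trained to distinguish domains, while the feature extractor is updated in the opposite direction so that its features become hard to tell apart. A minimal NumPy sketch with hand-derived gradients, assuming a linear extractor and logistic domain head (all dimensions, names, and hyperparameters here are illustrative, not taken from any paper listed below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "domains" whose features differ only by a mean shift.
X_src = rng.normal(0.0, 1.0, size=(100, 5))
X_tgt = rng.normal(1.5, 1.0, size=(100, 5))
X = np.vstack([X_src, X_tgt])
d = np.concatenate([np.zeros(100), np.ones(100)])  # domain labels (0=source, 1=target)

W = rng.normal(scale=0.1, size=(5, 3))  # linear feature extractor (hypothetical)
v = rng.normal(scale=0.1, size=3)       # logistic domain-classifier head

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-np.clip(a, -30, 30)))

lr, lam = 0.05, 1.0  # learning rate and reversal strength (illustrative)
for _ in range(100):
    z = X @ W                          # extracted features, shape (200, 3)
    p = sigmoid(z @ v)                 # predicted prob. of target domain
    err = p - d                        # gradient of BCE loss at the logit
    g_v = z.T @ err / len(d)           # gradient for the domain head
    g_W = X.T @ (err[:, None] * v) / len(d)  # chain rule back to the extractor
    v -= lr * g_v                      # head: gradient *descent* on domain loss
    W += lr * lam * g_W                # reversal: extractor *ascends* domain loss
```

The sign flip on the `W` update is the entire trick: the head gets better at domain classification while the extractor is pushed to erase whatever the head exploits, driving the features toward domain invariance.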
Papers
Disentangling Masked Autoencoders for Unsupervised Domain Generalization
An Zhang, Han Wang, Xiang Wang, Tat-Seng Chua
Unity in Diversity: Multi-expert Knowledge Confrontation and Collaboration for Generalizable Vehicle Re-identification
Zhenyu Kuang, Hongyang Zhang, Lidong Cheng, Yinhao Liu, Yue Huang, Xinghao Ding
Grounding Stylistic Domain Generalization with Quantitative Domain Shift Measures and Synthetic Scene Images
Yiran Luo, Joshua Feinglass, Tejas Gokhale, Kuan-Cheng Lee, Chitta Baral, Yezhou Yang
Unbiased Faster R-CNN for Single-source Domain Generalized Object Detection
Yajing Liu, Shijun Zhou, Xiyao Liu, Chunhui Hao, Baojie Fan, Jiandong Tian