Domain-Invariant Representation
Domain-invariant representation learning aims to build machine learning models that generalize across different data distributions (domains), overcoming the limitations of models trained on a single dataset. Current research develops techniques, often based on contrastive learning, autoencoders, and transformer architectures, to extract features that are robust to domain shift, drawing on both supervised and unsupervised paradigms, including self-training and pseudo-labeling. The field is crucial for deploying reliable models in real-world settings where data heterogeneity is common, with applications ranging from medical image analysis and autonomous driving to bioacoustic monitoring; the ultimate goal is more robust and generalizable AI systems.
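To make the contrastive idea concrete, below is a minimal NumPy sketch of one common ingredient: a cross-domain supervised contrastive objective, where same-class samples from *different* domains are treated as positives, encouraging features to align across domains. The function name, the temperature value, and the overall setup are illustrative assumptions, not the method of any specific paper listed here.

```python
import numpy as np

def cross_domain_contrastive_loss(feats, labels, domains, temperature=0.5):
    """InfoNCE-style loss (illustrative sketch): for each anchor, positives are
    samples with the same class label but a different domain label, so minimizing
    the loss pulls class features together across domains."""
    # L2-normalize so the dot product is cosine similarity
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature

    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # Positives: same class, different domain
        pos = (labels == labels[i]) & (domains != domains[i])
        if not pos.any():
            continue
        # Drop self-similarity from both logits and positive mask
        logits = np.delete(sim[i], i)
        pos_mask = np.delete(pos, i)
        # Log-softmax over all non-self pairs
        log_prob = logits - np.log(np.sum(np.exp(logits)))
        loss += -log_prob[pos_mask].mean()
        count += 1
    return loss / count
```

In practice, a loss like this is applied to encoder outputs during training; here random features suffice to show the interface, e.g. `cross_domain_contrastive_loss(feats, labels, domains)` with `labels` giving class IDs and `domains` giving dataset IDs per sample.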
Papers
DGM-DR: Domain Generalization with Mutual Information Regularized Diabetic Retinopathy Classification
Aleksandr Matsun, Dana O. Mohamed, Sharon Chokuwa, Muhammad Ridzuan, Mohammad Yaqub
DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues
Kun Pan, Yin Yifang, Yao Wei, Feng Lin, Zhongjie Ba, Zhenguang Liu, ZhiBo Wang, Lorenzo Cavallaro, Kui Ren