Domain Generalization
Domain generalization (DG) aims to train machine learning models that perform well on data from distributions unseen during training, overcoming the limitation of conventional models, which assume that training and test data are drawn from the same distribution. Current research focuses on improving model robustness through techniques such as self-supervised learning, data augmentation (including methods like style prompting and spectrum synthesis), foundation models, and parameter-efficient fine-tuning. These advances are crucial for deploying reliable AI systems in real-world applications where data variability is inevitable, particularly in medical imaging, autonomous systems, and natural language processing.
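To make the augmentation idea concrete, the sketch below shows one common spectrum-based augmentation for DG: the Fourier amplitude spectrum of a training image is mixed with that of an image from another domain while its phase is kept. This is a minimal, generic NumPy example, not the specific method of any paper listed below; the function name `amplitude_mix` and the mixing weight `alpha` are illustrative choices.

```python
import numpy as np

def amplitude_mix(src_img, ref_img, alpha=0.5):
    """Mix the Fourier amplitude spectra of two images while keeping the
    phase of the source image. Inputs are float arrays of shape (H, W, C)
    with values in [0, 1]."""
    # Per-channel 2-D FFT
    src_fft = np.fft.fft2(src_img, axes=(0, 1))
    ref_fft = np.fft.fft2(ref_img, axes=(0, 1))

    src_amp, src_phase = np.abs(src_fft), np.angle(src_fft)
    ref_amp = np.abs(ref_fft)

    # Interpolate amplitude (low-level "style" statistics);
    # phase (image structure) is preserved, so the label stays unchanged.
    mixed_amp = (1.0 - alpha) * src_amp + alpha * ref_amp

    mixed_fft = mixed_amp * np.exp(1j * src_phase)
    out = np.fft.ifft2(mixed_fft, axes=(0, 1)).real
    return np.clip(out, 0.0, 1.0)

# Usage: augment a training image with the amplitude "style" of an image
# drawn from a different source domain (random data here as a placeholder).
src = np.random.rand(224, 224, 3)
ref = np.random.rand(224, 224, 3)
aug = amplitude_mix(src, ref, alpha=np.random.uniform(0.0, 0.5))
```

The design rationale is that amplitude spectra tend to capture domain-specific appearance statistics while phase carries semantic structure, so mixing amplitudes exposes the model to new "styles" without altering the content or label.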
Papers
Lifelong Learning Using a Dynamically Growing Tree of Sub-networks for Domain Generalization in Video Object Segmentation
Islam Osman, Mohamed S. Shehata
Clustering-Based Validation Splits for Model Selection under Domain Shift
Andrea Napoli, Paul White
Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts
Ruipeng Zhang, Ziqing Fan, Jiangchao Yao, Ya Zhang, Yanfeng Wang
DGMamba: Domain Generalization via Generalized State Space Model
Shaocong Long, Qianyu Zhou, Xiangtai Li, Xuequan Lu, Chenhao Ying, Yuan Luo, Lizhuang Ma, Shuicheng Yan
PromptSync: Bridging Domain Gaps in Vision-Language Models through Class-Aware Prototype Alignment and Discrimination
Anant Khandelwal