Domain Generalization
Domain generalization (DG) aims to train machine learning models that perform well on data from unseen distributions, overcoming the limitation that models are typically trained and evaluated on similarly distributed data. Current research focuses on improving model robustness through techniques such as self-supervised learning, data augmentation (including newer methods like style prompting and spectrum synthesis), and the use of foundation models with parameter-efficient fine-tuning. These advances are crucial for deploying reliable AI systems in real-world applications where data variability is inevitable, particularly in fields like medical imaging, autonomous systems, and natural language processing.
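To make the "spectrum synthesis" idea concrete, the sketch below shows one common frequency-domain augmentation recipe for DG: mixing the Fourier amplitude spectrum of an image with that of a reference image while keeping the original phase, so low-level "style" varies while semantic "content" is preserved. This is a minimal illustration of the general technique, not the implementation from any specific paper listed here; the function name and interpolation scheme are assumptions.

```python
import numpy as np

def amplitude_mix(x, x_ref, alpha=0.5):
    """Frequency-domain augmentation sketch for domain generalization.

    Interpolates the Fourier amplitude spectrum of `x` toward that of
    `x_ref` while keeping the phase of `x`. Amplitude roughly carries
    low-level style statistics; phase roughly carries content, so the
    result looks like `x` rendered with a different "style".

    Note: illustrative only -- not the method of any paper above.
    """
    fft_x = np.fft.fft2(x, axes=(0, 1))
    fft_r = np.fft.fft2(x_ref, axes=(0, 1))

    amp_x, pha_x = np.abs(fft_x), np.angle(fft_x)
    amp_r = np.abs(fft_r)

    # Linearly interpolate amplitudes; alpha=0 recovers the original image.
    amp_mix = (1.0 - alpha) * amp_x + alpha * amp_r

    # Recombine mixed amplitude with the original phase and invert.
    mixed = amp_mix * np.exp(1j * pha_x)
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```

In a training loop, `x_ref` would typically be drawn from a different source domain in the batch, and `alpha` sampled randomly per example to vary augmentation strength.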
Papers
Towards Combating Frequency Simplicity-biased Learning for Domain Generalization
Xilin He, Jingyu Hu, Qinliang Lin, Cheng Luo, Weicheng Xie, Siyang Song, Muhammad Haris Khan, Linlin Shen
START: A Generalized State Space Model with Saliency-Driven Token-Aware Transformation
Jintao Guo, Lei Qi, Yinghuan Shi, Yang Gao
FlickerFusion: Intra-trajectory Domain Generalizing Multi-Agent RL
Woosung Koh, Wonbeen Oh, Siyeol Kim, Suhin Shin, Hyeongjin Kim, Jaein Jang, Junghyun Lee, Se-Young Yun
DomainSum: A Hierarchical Benchmark for Fine-Grained Domain Shift in Abstractive Text Summarization
Haohan Yuan, Haopeng Zhang
Can Large Language Models Invent Algorithms to Improve Themselves?
Yoichi Ishibashi, Taro Yano, Masafumi Oyamada