Cross-Domain
Cross-domain research develops methods that allow machine learning models trained on one type of data (e.g., images from one city) to generalize effectively to different but related data (e.g., images from another city). Current efforts concentrate on improving model robustness through techniques such as adversarial domain adaptation, graph-based feature fusion, and the use of pre-trained models (e.g., LLMs, transformers) to transfer knowledge across domains, often to address data scarcity or distributional shift. This work is crucial for building more generalizable and reliable AI systems, with impact on fields ranging from autonomous driving and medical image analysis to financial risk assessment and natural language processing. The ultimate goal is to reduce the need for extensive retraining when models are deployed in new environments or on new tasks.
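To make the adversarial approach mentioned above concrete, here is a minimal sketch of DANN-style adversarial domain adaptation via gradient reversal, assuming PyTorch. The network sizes, toy batches, and unit loss weighting are illustrative assumptions, not details taken from any paper listed below.

```python
# Minimal sketch of adversarial domain adaptation (DANN-style gradient
# reversal), assuming PyTorch. Dimensions and toy data are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
label_classifier = nn.Linear(64, 10)     # trained on labeled source data only
domain_discriminator = nn.Linear(64, 2)  # predicts source vs. target

opt = torch.optim.Adam(
    list(feature_extractor.parameters())
    + list(label_classifier.parameters())
    + list(domain_discriminator.parameters()),
    lr=1e-3,
)
ce = nn.CrossEntropyLoss()

for step in range(100):
    # Toy stand-ins: labeled source batch, unlabeled target batch.
    src_x, src_y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    tgt_x = torch.randn(16, 32)

    feats = feature_extractor(torch.cat([src_x, tgt_x]))

    # Task loss: only the source half of the batch has labels.
    task_loss = ce(label_classifier(feats[:16]), src_y)

    # Domain loss: the discriminator learns to tell domains apart, while the
    # reversed gradient trains the extractor to make features domain-invariant.
    domain_labels = torch.cat([torch.zeros(16), torch.ones(16)]).long()
    domain_loss = ce(domain_discriminator(grad_reverse(feats)), domain_labels)

    opt.zero_grad()
    (task_loss + domain_loss).backward()
    opt.step()
```

The gradient-reversal trick lets a single optimizer handle both objectives in one backward pass, instead of the alternating generator/discriminator updates used in GAN-style adaptation.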
Papers
CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents
Tianqi Xu, Linyao Chen, Dai-Jie Wu, Yanjun Chen, Zecheng Zhang, Xiang Yao, Zhiqiang Xie, Yongchao Chen, Shilong Liu, Bochen Qian, Anjie Yang, Zhaoxuan Jin, Jianbo Deng, Philip Torr, Bernard Ghanem, Guohao Li
Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation
Nadezhda Chirkova, Vassilina Nikoulina, Jean-Luc Meunier, Alexandre Bérard