Cross Domain
Cross-domain research develops methods that allow machine learning models trained on one type of data (e.g., images from one city) to generalize effectively to different but related data (e.g., images from another city). Current efforts concentrate on improving model robustness through techniques such as adversarial domain adaptation, graph-based feature fusion, and the use of pre-trained models (e.g., LLMs and transformers) to transfer knowledge across domains, often to address data scarcity or distribution shift. This work is crucial for building more generalizable and reliable AI systems, with impact across fields ranging from autonomous driving and medical image analysis to financial risk assessment and natural language processing. The ultimate goal is to reduce the need for extensive retraining when deploying models in new environments or on new tasks.
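As a concrete illustration of one technique named above, the sketch below shows adversarial domain adaptation in the style of a gradient reversal layer (DANN-like): a shared encoder is trained so a task classifier succeeds while a domain discriminator fails, encouraging domain-invariant features. All model sizes, names, and the synthetic data are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal sketch of adversarial domain adaptation via a gradient reversal
# layer (DANN-style). Dimensions, layer names, and data are hypothetical.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DANN(nn.Module):
    """Shared encoder + task classifier + domain discriminator."""

    def __init__(self, in_dim=256, hidden=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)
        self.domain_head = nn.Linear(hidden, 2)  # source vs. target domain

    def forward(self, x, lambd=1.0):
        feats = self.encoder(x)
        class_logits = self.classifier(feats)
        # Reversed gradients push the encoder toward domain-invariant features.
        domain_logits = self.domain_head(GradReverse.apply(feats, lambd))
        return class_logits, domain_logits


# One illustrative training step on synthetic tensors.
model = DANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

src_x, src_y = torch.randn(32, 256), torch.randint(0, 10, (32,))
tgt_x = torch.randn(32, 256)  # unlabeled target-domain batch

src_cls, src_dom = model(src_x)
_, tgt_dom = model(tgt_x)
dom_labels = torch.cat([torch.zeros(32), torch.ones(32)]).long()

# Task loss on labeled source data + domain loss on both domains.
loss = ce(src_cls, src_y) + ce(torch.cat([src_dom, tgt_dom]), dom_labels)
opt.zero_grad()
loss.backward()
opt.step()
```

The gradient reversal coefficient (here `lambd`) is typically annealed from 0 to 1 during training so the discriminator stabilizes before the encoder is pushed toward domain invariance; the fixed value here is only for brevity.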
Papers
IP-MOT: Instance Prompt Learning for Cross-Domain Multi-Object Tracking
Run Luo, Zikai Song, Longze Chen, Yunshui Li, Min Yang, Wei Yang
CrossEarth: Geospatial Vision Foundation Model for Domain Generalizable Remote Sensing Semantic Segmentation
Ziyang Gong, Zhixiang Wei, Di Wang, Xianzheng Ma, Hongruixuan Chen, Yuru Jia, Yupeng Deng, Zhenming Ji, Xiangwei Zhu, Naoto Yokoya, Jing Zhang, Bo Du, Liangpei Zhang
OPONeRF: One-Point-One NeRF for Robust Neural Rendering
Yu Zheng, Yueqi Duan, Kangfu Zheng, Hongru Yan, Jiwen Lu, Jie Zhou
Law of the Weakest Link: Cross Capabilities of Large Language Models
Ming Zhong, Aston Zhang, Xuewei Wang, Rui Hou, Wenhan Xiong, Chenguang Zhu, Zhengxing Chen, Liang Tan, Chloe Bi, Mike Lewis, Sravya Popuri, Sharan Narang, Melanie Kambadur, Dhruv Mahajan, Sergey Edunov, Jiawei Han, Laurens van der Maaten