Cross-Domain
Cross-domain research focuses on developing methods that allow machine learning models trained on one type of data (e.g., images from one city) to generalize effectively to different, but related, data (e.g., images from another city). Current efforts concentrate on improving model robustness through techniques like adversarial domain adaptation, graph-based feature fusion, and the use of pre-trained models (e.g., LLMs, transformers) to transfer knowledge across domains, often addressing data scarcity or distributional shifts. This work is crucial for building more generalizable and reliable AI systems, impacting diverse fields from autonomous driving and medical image analysis to financial risk assessment and natural language processing. The ultimate goal is to reduce the need for extensive retraining when deploying models in new environments or tasks.
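To make the adversarial domain adaptation idea mentioned above concrete, here is a minimal DANN-style sketch with a gradient-reversal layer: a shared feature extractor is trained on labeled source data while a domain discriminator's reversed gradients push the features toward domain invariance. The toy network sizes, random batches, and the `lambd` weight are illustrative assumptions, not details taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Toy components: shared feature extractor, task classifier, domain discriminator.
feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
label_classifier = nn.Linear(64, 10)
domain_discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random source/target batches (assumed data).
source_x, source_y = torch.randn(16, 32), torch.randint(0, 10, (16,))
target_x = torch.randn(16, 32)  # unlabeled target-domain batch

src_feat = feature_extractor(source_x)
tgt_feat = feature_extractor(target_x)

# Task loss on labeled source data only.
task_loss = criterion(label_classifier(src_feat), source_y)

# Domain loss: discriminator separates source (0) from target (1); the reversed
# gradient trains the feature extractor to make the two domains indistinguishable.
feats = torch.cat([src_feat, tgt_feat], dim=0)
domain_labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
domain_loss = criterion(domain_discriminator(grad_reverse(feats, lambd=0.1)), domain_labels)

loss = task_loss + domain_loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the same skeleton is reused with task-specific backbones; only the feature extractor and loss weighting typically change across applications.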
Papers
Adapting Self-Supervised Representations to Multi-Domain Setups
Neha Kalibhat, Sam Sharpe, Jeremy Goodsitt, Bayan Bruss, Soheil Feizi
CDFSL-V: Cross-Domain Few-Shot Learning for Videos
Sarinda Samarasinghe, Mamshad Nayeem Rizve, Navid Kardan, Mubarak Shah
Cross-domain Sound Recognition for Efficient Underwater Data Analysis
Jeongsoo Park, Dong-Gyun Han, Hyoung Sul La, Sangmin Lee, Yoonchang Han, Eun-Jin Yang
CasCIFF: A Cross-Domain Information Fusion Framework Tailored for Cascade Prediction in Social Networks
Hongjun Zhu, Shun Yuan, Xin Liu, Kuo Chen, Chaolong Jia, Ying Qian
Learning multi-domain feature relation for visible and Long-wave Infrared image patch matching
Xiuwei Zhang, Yanping Li, Zhaoshuai Qi, Yi Sun, Yanning Zhang