Cross Domain
Cross-domain research develops methods that allow machine learning models trained on one type of data (e.g., images from one city) to generalize effectively to different but related data (e.g., images from another city). Current efforts concentrate on improving model robustness through techniques such as adversarial domain adaptation, graph-based feature fusion, and the use of pre-trained models (e.g., LLMs and transformers) to transfer knowledge across domains, often to address data scarcity or distributional shift. This work is crucial for building more generalizable and reliable AI systems, with impact across diverse fields from autonomous driving and medical image analysis to financial risk assessment and natural language processing. The ultimate goal is to reduce the need for extensive retraining when deploying models in new environments or on new tasks.
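As a concrete illustration of the adversarial domain adaptation mentioned above, the sketch below implements a gradient reversal layer in plain NumPy. This is a minimal sketch of the general technique, not code from any paper listed here; the class name and the `lam` trade-off parameter are illustrative assumptions. The idea: the layer is the identity in the forward pass, but negates (and scales) gradients flowing back from a domain classifier into the feature extractor, so the extractor is pushed toward features the classifier cannot use to tell source and target domains apart.

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer, the core trick of adversarial domain
    adaptation. Forward pass: identity. Backward pass: gradients from the
    domain classifier are negated and scaled by `lam`, so minimizing the
    domain loss downstream *maximizes* domain confusion upstream."""

    def __init__(self, lam: float = 1.0):
        # lam balances the task loss against the domain-confusion signal
        # (often annealed from 0 to 1 during training).
        self.lam = lam

    def forward(self, features: np.ndarray) -> np.ndarray:
        # Features pass through unchanged to the domain classifier.
        return features

    def backward(self, grad_from_domain_clf: np.ndarray) -> np.ndarray:
        # Reverse and scale the gradient before it reaches the
        # feature extractor's parameters.
        return -self.lam * grad_from_domain_clf
```

In a full pipeline this layer would sit between a shared feature extractor and a domain classifier, while the task head (e.g., a semantic segmentation or extraction head) receives the same features without reversal; frameworks with autodiff implement the same behavior as a custom backward function.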
Papers
Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction
Yueling Li, Sebastian Martschat, Simone Paolo Ponzetto
Cross-modal & Cross-domain Learning for Unsupervised LiDAR Semantic Segmentation
Yiyang Chen, Shanshan Zhao, Changxing Ding, Liyao Tang, Chaoyue Wang, Dacheng Tao