Cross-Domain
Cross-domain research develops methods that let machine learning models trained on one type of data (e.g., images from one city) generalize effectively to different but related data (e.g., images from another city). Current efforts concentrate on improving model robustness through techniques such as adversarial domain adaptation, graph-based feature fusion, and the use of pre-trained models (e.g., LLMs, transformers) to transfer knowledge across domains, often to address data scarcity or distributional shift. This work is crucial for building generalizable, reliable AI systems, with impact across fields from autonomous driving and medical image analysis to financial risk assessment and natural language processing. The ultimate goal is to reduce the need for extensive retraining when deploying models in new environments or on new tasks.
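To make the idea of correcting a distributional shift concrete, here is a minimal NumPy sketch of correlation alignment (CORAL-style), one classical cross-domain technique: source features are whitened and then re-colored with the target domain's second-order statistics. The function name `coral_align` and the synthetic data are illustrative assumptions, not drawn from any of the papers listed below.

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Align source feature covariance to the target domain's covariance.

    Illustrative CORAL-style correlation alignment (hypothetical helper,
    not from the listed papers): whiten the source features, then
    re-color them with the target covariance.
    """
    # Center both domains
    source = source - source.mean(axis=0)
    target = target - target.mean(axis=0)
    d = source.shape[1]
    # Regularized covariance matrices (eps keeps them invertible)
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def mat_pow(m, p):
        # Symmetric matrix power via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return (vecs * np.maximum(vals, eps) ** p) @ vecs.T

    # Whiten with Cs^{-1/2}, then re-color with Ct^{1/2}
    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # shifted source domain
tgt = rng.normal(size=(500, 4))                            # target domain
aligned = coral_align(src, tgt)
# After alignment, the source covariance closely matches the target's
gap = np.linalg.norm(np.cov(aligned, rowvar=False) - np.cov(tgt, rowvar=False))
```

A classifier trained on `aligned` features then sees target-like statistics at test time, which is the basic mechanism behind many shallow domain-adaptation baselines.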
Papers
Heterogeneous Target Speech Separation
Efthymios Tzinis, Gordon Wichern, Aswin Subramanian, Paris Smaragdis, Jonathan Le Roux
Measuring AI Systems Beyond Accuracy
Violet Turri, Rachel Dzombak, Eric Heim, Nathan VanHoudnos, Jay Palat, Anusha Sinha
Exploring Cross-Domain Pretrained Model for Hyperspectral Image Classification
Hyungtae Lee, Sungmin Eum, Heesung Kwon