Unsupervised Cross-Modality Domain Adaptation

Unsupervised cross-modality domain adaptation tackles the challenge of training models to perform tasks such as image segmentation or object recognition on a target modality (e.g., event-based data or high-resolution T2 MRI) using only labeled data from a different source modality (e.g., standard camera images or contrast-enhanced T1 MRI). Current research focuses on techniques such as image-to-image translation, self-supervised learning with pseudo-label generation and filtering (a minimal example of which is sketched below), and leveraging pre-trained vision-language models to bridge the modality gap. The field is significant because it reduces reliance on expensive, time-consuming data annotation, enabling deep learning to be applied to diverse but sparsely annotated datasets in areas such as medical imaging and robotics.
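To make the pseudo-label idea concrete, the sketch below shows one common recipe in PyTorch: a segmentation model trained on the labeled source modality predicts labels for unlabeled target-modality images, low-confidence pixels are filtered out via a softmax threshold, and the surviving pseudo-labels supervise a target-domain loss alongside the source loss. This is a generic illustration under assumed names (`model`, `conf_threshold`, `lambda_t` are hypothetical), not the method of any particular paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_filtered_pseudo_labels(model, target_images, conf_threshold=0.9):
    """Predict per-pixel pseudo-labels on unlabeled target-modality images,
    keeping only pixels whose softmax confidence exceeds the threshold."""
    model.eval()
    logits = model(target_images)                 # (B, C, H, W) segmentation logits
    probs = F.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)  # each (B, H, W)
    # Mark low-confidence pixels with the ignore index so the loss skips them.
    pseudo_labels[confidence < conf_threshold] = -100
    return pseudo_labels

def self_training_step(model, optimizer, source_batch, target_images, lambda_t=0.5):
    """One adaptation step: supervised loss on labeled source-modality data
    plus a filtered pseudo-label loss on unlabeled target-modality data."""
    src_images, src_labels = source_batch
    pseudo_labels = generate_filtered_pseudo_labels(model, target_images)

    model.train()
    src_loss = F.cross_entropy(model(src_images), src_labels)
    tgt_loss = F.cross_entropy(model(target_images), pseudo_labels,
                               ignore_index=-100)
    loss = src_loss + lambda_t * tgt_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, published methods typically refine this loop, for example by generating pseudo-labels with a frozen or EMA teacher model, scheduling the confidence threshold per class, or first narrowing the modality gap with image-to-image translation before self-training begins.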

Papers