Cross Modality
Cross-modality research focuses on integrating information from different data types (e.g., images, text, audio) to improve model performance and understanding. Current work emphasizes robust methods for handling inconsistencies between modalities, particularly contrastive learning, generative adversarial networks (GANs), and vision transformers, often within unsupervised domain adaptation or self-training frameworks. The field is significant for medical image analysis (e.g., improved segmentation and diagnosis), autonomous driving, and other applications that must fuse heterogeneous data sources, ultimately yielding more accurate and reliable systems.
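To make the contrastive-learning technique mentioned above concrete, below is a minimal sketch (not taken from any listed paper) of a symmetric cross-modal InfoNCE objective in PyTorch. The projection heads, dimensions, and `CrossModalContrastive` name are illustrative assumptions; real systems would replace the small MLPs with, e.g., a vision transformer and a text or audio encoder.

```python
# Illustrative sketch of a cross-modal contrastive (InfoNCE) objective.
# The encoders are hypothetical stand-ins (simple MLPs), not a specific paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalContrastive(nn.Module):
    """Projects two modalities into a shared space and aligns paired samples."""

    def __init__(self, dim_a: int, dim_b: int, embed_dim: int = 128, temperature: float = 0.07):
        super().__init__()
        self.proj_a = nn.Sequential(nn.Linear(dim_a, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.proj_b = nn.Sequential(nn.Linear(dim_b, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.temperature = temperature

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # L2-normalize embeddings so the dot product acts as a cosine similarity.
        za = F.normalize(self.proj_a(feats_a), dim=-1)
        zb = F.normalize(self.proj_b(feats_b), dim=-1)
        # Similarity matrix: entry (i, j) compares sample i of modality A with sample j of modality B.
        logits = za @ zb.t() / self.temperature
        targets = torch.arange(za.size(0), device=za.device)
        # Symmetric InfoNCE: matched pairs (the diagonal) should score highest in both directions.
        loss_a2b = F.cross_entropy(logits, targets)
        loss_b2a = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_a2b + loss_b2a)


if __name__ == "__main__":
    model = CrossModalContrastive(dim_a=512, dim_b=300)
    # Dummy batch of 8 paired samples (e.g., image features and text features).
    loss = model(torch.randn(8, 512), torch.randn(8, 300))
    print(f"contrastive loss: {loss.item():.4f}")
```

Minimizing this loss pulls embeddings of paired cross-modal samples together while pushing mismatched pairs apart, which is the core mechanism behind many of the alignment and domain-adaptation approaches surveyed here.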
Papers
Self-semantic contour adaptation for cross modality brain tumor segmentation
Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Unsupervised Domain Adaptation for Cross-Modality Retinal Vessel Segmentation via Disentangling Representation Style Transfer and Collaborative Consistency Learning
Linkai Peng, Li Lin, Pujin Cheng, Ziqi Huang, Xiaoying Tang