Paper ID: 2311.15090
Fine-Grained Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation
Luyi Han, Tao Tan, Ritse Mann
Domain adaptation has gained wide acceptance for transferring styles across vendors and centers and for bridging gaps between modalities. However, multi-center applications make domain adaptation more difficult because of intra-domain differences among centers. We introduce a fine-grained unsupervised domain adaptation framework to facilitate cross-modality segmentation of vestibular schwannoma (VS) and the cochlea. We propose to control the generator with a feature vector so that it synthesizes a fake image with the given characteristics; the dataset can then be augmented in diverse ways by searching the feature dictionary. This diverse augmentation increases the performance and robustness of the segmentation model. On the CrossMoDA validation-phase leaderboard, our method achieved mean Dice scores of 0.765 and 0.836 on VS and cochlea, respectively.
Submitted: Nov 25, 2023
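
The abstract describes conditioning a generator on a feature vector and then augmenting the dataset by drawing vectors from a feature dictionary. Below is a minimal, hypothetical sketch of that idea (not the authors' implementation): the class `FeatureConditionedGenerator`, the FiLM-style conditioning, and the `feature_dictionary` entries are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class FeatureConditionedGenerator(nn.Module):
    """Toy generator: translates a source-modality slice, conditioned on a feature vector."""

    def __init__(self, feat_dim: int = 8, base_ch: int = 16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, base_ch, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Project the controlling feature vector to per-channel scale/shift (FiLM-style).
        self.film = nn.Linear(feat_dim, 2 * base_ch)
        self.decode = nn.Sequential(
            nn.Conv2d(base_ch, base_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(base_ch, 1, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        h = self.encode(x)
        scale, shift = self.film(feat).chunk(2, dim=1)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.decode(h)


# Hypothetical "feature dictionary": one vector per target style (e.g. per center/vendor).
feature_dictionary = {f"style_{i}": torch.randn(8) for i in range(4)}

generator = FeatureConditionedGenerator()
source_image = torch.randn(1, 1, 64, 64)  # stand-in for a source-modality slice

# Augment by synthesizing one fake target-style image per dictionary entry.
augmented = {
    name: generator(source_image, vec.unsqueeze(0))
    for name, vec in feature_dictionary.items()
}
print({name: img.shape for name, img in augmented.items()})
```

In this sketch, each dictionary entry yields a differently styled synthetic image from the same source slice, which is the kind of diverse augmentation the abstract credits for improving segmentation robustness.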