Paper ID: 2207.12995

Exploring Generalizable Distillation for Efficient Medical Image Segmentation

Xingqun Qi, Zhuojie Wu, Min Ren, Muyi Sun, Caifeng Shan, Zhenan Sun

Efficient medical image segmentation aims to provide accurate pixel-wise predictions for medical images with a lightweight implementation framework. However, lightweight frameworks generally fail to achieve superior performance and suffer from poor generalization on cross-domain tasks. In this paper, we explore generalizable knowledge distillation for the efficient segmentation of cross-domain medical images. Considering the domain gaps between different medical datasets, we propose Model-Specific Alignment Networks (MSAN) to obtain domain-invariant representations. Meanwhile, a customized Alignment Consistency Training (ACT) strategy is designed to promote MSAN training. Building on the domain-invariant representative vectors produced by MSAN, we propose two generalizable knowledge distillation schemes for cross-domain distillation: Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD). Specifically, in DCGD, two types of implicit contrastive graphs are designed to represent the intra-coupling and inter-coupling semantic correlations from the perspective of data distribution. In DICD, the domain-invariant semantic vectors from the two models (i.e., teacher and student) are leveraged to cross-reconstruct features via the header exchange of MSAN, which improves the generalization of both the encoder and the decoder in the student model. Furthermore, a metric named Fréchet Semantic Distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments conducted on liver and retinal vessel segmentation datasets demonstrate the superiority of our method in terms of both performance and generalization with lightweight frameworks.
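
The abstract does not give the formula for the Fréchet Semantic Distance. A minimal sketch, assuming FSD follows the standard Fréchet distance between Gaussian fits of feature statistics (as in the Fréchet Inception Distance), with \mu and \Sigma denoting the mean and covariance of the domain-invariant semantic vectors gathered from two domains X and Y (an assumption for illustration, not the paper's exact definition):

\mathrm{FSD}(X, Y) = \lVert \mu_X - \mu_Y \rVert_2^2 + \mathrm{Tr}\!\left( \Sigma_X + \Sigma_Y - 2\,(\Sigma_X \Sigma_Y)^{1/2} \right)

Under this reading, a lower FSD between domains would indicate that the regularized features are more domain-invariant.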

Submitted: Jul 26, 2022