Paper ID: 2112.02508
Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised Medical Image Segmentation
Yichi Zhang, Rushi Jiao, Qingcheng Liao, Dongyang Li, Jicong Zhang
Medical image segmentation is a fundamental and critical step in many clinical pipelines. Semi-supervised learning has been widely applied to medical image segmentation tasks, since it alleviates the heavy burden of acquiring expert-examined annotations and takes advantage of unlabeled data, which is much easier to acquire. Although consistency learning has been proven effective by enforcing invariance of predictions under different distributions, existing approaches cannot make full use of region-level shape constraints and boundary-level distance information from unlabeled data. In this paper, we propose a novel uncertainty-guided mutual consistency learning framework that effectively exploits unlabeled data by integrating intra-task consistency learning from up-to-date predictions for self-ensembling with cross-task consistency learning from task-level regularization to exploit geometric shape information. The framework is guided by the estimated segmentation uncertainty of the models to select relatively certain predictions for consistency learning, so as to exploit more reliable information from unlabeled data. Experiments on two publicly available benchmark datasets showed that: 1) our proposed method achieves significant performance improvements by leveraging unlabeled data, with gains of up to 4.13% and 9.82% in Dice coefficient over the supervised baseline on left atrium segmentation and brain tumor segmentation, respectively; 2) compared with other semi-supervised segmentation methods, our proposed method achieves better segmentation performance under the same backbone network and task settings on both datasets, demonstrating the effectiveness and robustness of our method and its potential transferability to other medical image segmentation tasks.
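To make the uncertainty-guided selection concrete, below is a minimal PyTorch sketch of the core idea described in the abstract: estimate per-pixel uncertainty (here via Monte Carlo dropout, a common choice in uncertainty-aware semi-supervised segmentation, assumed rather than taken from the paper) and apply a consistency loss only on pixels whose uncertainty falls below a threshold. The function names, the MSE consistency term, and the entropy-based uncertainty measure are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_samples=8):
    """Estimate per-pixel predictive entropy via Monte Carlo dropout.

    Assumes `model` contains dropout layers; keeping it in train mode
    makes the stochastic forward passes differ. Hypothetical helper.
    """
    model.train()  # keep dropout active during sampling
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=1) for _ in range(n_samples)]
        )                                   # (n_samples, B, C, H, W)
    mean_p = probs.mean(dim=0)              # (B, C, H, W)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)  # (B, H, W)
    return mean_p, entropy

def uncertainty_guided_consistency(pred_student, pred_teacher, uncertainty, threshold):
    """MSE consistency restricted to relatively certain pixels.

    Pixels whose estimated uncertainty exceeds `threshold` are masked out,
    so only reliable predictions contribute to the consistency loss.
    """
    mask = (uncertainty < threshold).float().unsqueeze(1)   # (B, 1, H, W)
    diff = (F.softmax(pred_student, dim=1) - pred_teacher) ** 2
    return (mask * diff).sum() / (mask.sum() * diff.shape[1] + 1e-8)
```

In a full semi-supervised pipeline, a loss of this form would be added to the supervised segmentation loss on labeled data, with the threshold typically ramped up over training so that more pixels participate as the model's predictions stabilize.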
Submitted: Dec 5, 2021