Paper ID: 2211.07190
TriDoNet: A Triple Domain Model-driven Network for CT Metal Artifact Reduction
Baoshun Shi, Ke Jiang, Shaolei Zhang, Qiusheng Lian, Yanwei Qin
Recent deep learning-based methods have achieved promising performance for computed tomography metal artifact reduction (CTMAR). However, most of them suffer from two limitations: (i) domain knowledge is not fully embedded into network training; (ii) metal artifacts lack an effective representation model. These limitations leave room for further performance improvement. To address these issues, we propose a novel triple domain model-driven CTMAR network, termed TriDoNet, whose training exploits knowledge from three domains, i.e., the sinogram, CT image, and metal artifact domains. Specifically, to capture the non-local repetitive streaking patterns of metal artifacts, we encode them with an explicit tight-frame sparse representation model with adaptive thresholds. Furthermore, we design a contrastive regularization (CR) built upon contrastive learning that uses clean CT images and metal-affected images as positive and negative samples, respectively. Experimental results show that our TriDoNet generates superior artifact-reduced CT images.
Submitted: Nov 14, 2022
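
To make the two components named in the abstract concrete, below are minimal PyTorch sketches under stated assumptions; they are illustrations of the general techniques, not the paper's implementation, and all module names, filter counts, thresholds, and the choice of feature extractor and L1 distance are assumptions for illustration only.

First, a sketch of an artifact-domain model in the spirit of a tight-frame sparse representation with adaptive thresholds: a convolutional analysis operator stands in for the frame, and a learnable per-channel soft-threshold sparsifies the artifact coefficients.

import torch
import torch.nn as nn

class AdaptiveTightFrameSparsifier(nn.Module):
    # Sketch: analysis filters plus learnable soft-thresholds for the artifact layer.
    def __init__(self, n_filters: int = 16, kernel_size: int = 3):
        super().__init__()
        # Convolutional analysis operator standing in for a tight frame (assumption).
        self.analysis = nn.Conv2d(1, n_filters, kernel_size,
                                  padding=kernel_size // 2, bias=False)
        # One adaptive (learnable) threshold per filter channel.
        self.thresholds = nn.Parameter(torch.full((1, n_filters, 1, 1), 0.05))

    def forward(self, artifact):
        coeffs = self.analysis(artifact)                          # frame coefficients
        mag = torch.relu(coeffs.abs() - self.thresholds.abs())    # adaptive soft-threshold
        return torch.sign(coeffs) * mag                           # sparsified coefficients

Second, a sketch of a contrastive regularization term as described in the abstract: the restored CT image (anchor) is pulled toward the clean image (positive) and pushed away from the metal-affected input (negative) in the feature space of a frozen extractor. The toy single-channel encoder in the usage example is purely a stand-in.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveRegularization(nn.Module):
    def __init__(self, feature_extractor: nn.Module, eps: float = 1e-7):
        super().__init__()
        self.features = feature_extractor.eval()   # frozen feature extractor
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.eps = eps

    def forward(self, restored, clean, metal_affected):
        f_r = self.features(restored)        # anchor: network output
        f_p = self.features(clean)           # positive: artifact-free CT
        f_n = self.features(metal_affected)  # negative: metal-corrupted CT
        d_pos = F.l1_loss(f_r, f_p)          # pull toward the clean image
        d_neg = F.l1_loss(f_r, f_n)          # push away from the corrupted image
        return d_pos / (d_neg + self.eps)    # smaller is better

if __name__ == "__main__":
    # Toy encoder as a stand-in feature extractor (assumption, not the paper's choice).
    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )
    cr = ContrastiveRegularization(encoder)
    x_rec = torch.rand(2, 1, 64, 64)   # restored CT
    x_gt = torch.rand(2, 1, 64, 64)    # clean CT (positive sample)
    x_ma = torch.rand(2, 1, 64, 64)    # metal-affected CT (negative sample)
    print(cr(x_rec, x_gt, x_ma).item())

In such a loss, minimizing the ratio of the positive distance to the negative distance encourages reconstructions that stay close to clean CT statistics while moving away from the metal-corrupted input.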