Paper ID: 2112.08968
Automated segmentation of 3-D body composition on computed tomography
Lucy Pu, Syed F. Ashraf, Naciye S. Gezer, Iclal Ocak, Rajeev Dhupar
Purpose: To develop and validate a computer tool for automatic and simultaneous segmentation of body composition depicted on computed tomography (CT) scans for the following tissues: visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), intermuscular adipose tissue (IMAT), skeletal muscle (SM), and bone.

Approach: A cohort of 100 CT scans acquired from The Cancer Imaging Archive (TCIA) was used: 50 whole-body positron emission tomography (PET)-CT, 25 chest, and 25 abdominal scans. The five body composition tissues (VAT, SAT, IMAT, SM, and bone) were manually annotated. For efficiency, a training-while-annotating strategy was used: a UNet model was trained on the cases annotated so far and then used to enable semi-automatic annotation of the remaining cases. Ten-fold cross-validation was used to develop and validate the performance of several convolutional neural networks (CNNs), including UNet, Recurrent Residual UNet (R2Unet), and UNet++. A 3-D patch sampling operation was used when training the CNN models. CNN models trained separately for each tissue were also tested to determine whether they outperformed a single model segmenting all tissues jointly. A paired-samples t-test was used to assess statistical significance.

Results: Among the three CNN models, UNet demonstrated the best overall performance in jointly segmenting the five body composition tissues, with Dice coefficients of 0.840±0.091, 0.908±0.067, 0.603±0.084, 0.889±0.027, and 0.884±0.031, and Jaccard indices of 0.734±0.119, 0.837±0.096, 0.437±0.082, 0.800±0.042, and 0.793±0.049 for VAT, SAT, IMAT, SM, and bone, respectively.

Conclusion: There were no significant differences among the CNN models in segmenting body composition, but jointly segmenting the body composition tissues achieved better performance than segmenting them separately.
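As a rough illustration of the evaluation metrics reported above, the sketch below computes the Dice coefficient and Jaccard index for one binary 3-D mask using NumPy. The label-to-tissue mapping and the variable names are hypothetical and only for illustration; they are not taken from the paper.

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray):
    """Dice coefficient and Jaccard index for one binary 3-D mask.

    pred, truth: arrays of identical shape (z, y, x), interpreted as boolean.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    eps = 1e-8  # guards against empty masks
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    jaccard = intersection / (union + eps)
    return dice, jaccard

# Hypothetical per-tissue evaluation of a multi-class label volume
# (1=VAT, 2=SAT, 3=IMAT, 4=SM, 5=bone; label assignment is illustrative):
# for label in range(1, 6):
#     d, j = dice_and_jaccard(pred_labels == label, truth_labels == label)
```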
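The 3-D patch sampling operation is not detailed in the abstract; the sketch below shows one common way to randomly crop matched patches from a CT volume and its label mask. The patch size and function name are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def sample_3d_patch(volume: np.ndarray, mask: np.ndarray,
                    patch_size=(64, 64, 64), rng=None):
    """Randomly crop a 3-D patch from a CT volume and its label mask.

    volume, mask: arrays of shape (z, y, x); patch_size is hypothetical,
    since the abstract does not specify the patch dimensions used.
    Assumes the volume is at least as large as the patch in each axis.
    """
    rng = rng or np.random.default_rng()
    z, y, x = volume.shape
    pz, py, px = patch_size
    z0 = rng.integers(0, z - pz + 1)
    y0 = rng.integers(0, y - py + 1)
    x0 = rng.integers(0, x - px + 1)
    sl = (slice(z0, z0 + pz), slice(y0, y0 + py), slice(x0, x0 + px))
    return volume[sl], mask[sl]
```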
Submitted: Dec 16, 2021