Paper ID: 2403.13113

Quantifying uncertainty in lung cancer segmentation with foundation models applied to mixed-domain datasets

Aneesh Rangnekar, Nishant Nadkarni, Jue Jiang, Harini Veeraraghavan

Medical image foundation models have shown the ability to segment organs and tumors with minimal fine-tuning. These models are typically evaluated on task-specific in-distribution (ID) datasets. However, reliable performance on ID datasets does not guarantee robust generalization to out-of-distribution (OOD) datasets. Importantly, once deployed for clinical use, it is impractical to obtain "ground truth" delineations to assess ongoing performance drift, especially when images fall into the OOD category due to differing imaging protocols. Hence, we introduced a comprehensive set of computationally fast metrics to evaluate the performance of multiple foundation models (Swin UNETR, SimMIM, iBOT, SMIT) trained with self-supervised learning (SSL). SSL pretraining was selected because it is applicable to large, diverse, and unlabeled image sets. All models were fine-tuned on identical datasets for lung tumor segmentation from computed tomography (CT) scans. SimMIM, iBOT, and SMIT used identical architectures, pretraining, and fine-tuning datasets to assess performance variations arising from the choice of pretext task used in SSL. Evaluation was performed on two public lung cancer datasets (LRAD: n = 140, 5Rater: n = 21) with different image acquisitions and tumor stages than the training data (n = 317, a public resource of stage III-IV lung cancers), and on a public non-cancer dataset containing volumetric CT scans of patients with pulmonary embolism (n = 120). All models produced similarly accurate tumor segmentations on the lung cancer testing datasets. SMIT produced the highest F1-score (LRAD: 0.60, 5Rater: 0.64) and lowest entropy (LRAD: 0.06, 5Rater: 0.12), indicating a higher tumor detection rate and more confident segmentations. On the OOD dataset, SMIT misdetected the fewest tumors, indicated by a median volume occupancy of 5.67 cc compared with 9.97 cc for the second-best method, SimMIM.
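The entropy and volume-occupancy metrics cited above can both be computed without ground-truth delineations, which is what makes them usable for post-deployment monitoring. The abstract does not specify the exact formulation, so the following is only a minimal sketch in Python/NumPy under assumed conventions: a per-voxel tumor probability map, (z, y, x) voxel spacing in millimetres, and illustrative function names chosen here rather than taken from the paper.

import numpy as np

def mean_binary_entropy(prob, eps=1e-8):
    # Mean voxel-wise binary entropy of a predicted tumor probability map.
    # Lower values suggest more confident segmentations (assumed metric form).
    p = np.clip(prob, eps, 1.0 - eps)
    h = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return float(h.mean())

def volume_occupancy_cc(mask, spacing_mm):
    # Volume of the predicted tumor mask in cubic centimetres.
    # spacing_mm: (z, y, x) voxel spacing in millimetres (assumed available
    # from the CT header); 1 cc = 1000 mm^3.
    voxel_cc = float(np.prod(spacing_mm)) / 1000.0
    return float(mask.sum() * voxel_cc)

# Hypothetical usage on one CT volume of shape (D, H, W):
# probs = model_tumor_probability  # softmax output for the tumor class
# mask = probs >= 0.5              # assumed binarization threshold
# print(mean_binary_entropy(probs), volume_occupancy_cc(mask, (3.0, 1.0, 1.0)))

On a non-cancer OOD set such as the pulmonary embolism scans, any predicted tumor volume is a false positive, so a lower volume occupancy (as reported for SMIT) indicates fewer misdetections.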

Submitted: Mar 19, 2024