Paper ID: 2203.00238
Uncertainty categories in medical image segmentation: a study of source-related diversity
Luke Whitbread, Mark Jenkinson
Measuring uncertainties in the output of a deep learning method is useful in several ways, such as assisting with interpretation of the outputs, helping to build confidence with end users, and improving the training and performance of the networks. Several methods have been proposed to estimate uncertainties, including those from epistemic (relating to the model used) and aleatoric (relating to the data) sources, using test-time dropout and test-time augmentation, respectively. Not only are these uncertainty sources different, but they are governed by parameter settings (e.g., dropout rate, or type and level of augmentation) that establish even more distinct uncertainty categories. This work investigates how the uncertainties from these categories differ, in both magnitude and spatial pattern, to empirically address the question of whether they provide usefully distinct information that should be captured whenever uncertainties are used. We use the well-characterised BraTS challenge dataset to demonstrate that there are substantial differences in both the magnitude and spatial pattern of uncertainties from the different categories, and we discuss the implications of these differences in various use cases.
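For concreteness, the two estimation techniques named in the abstract can be sketched as follows; this is a minimal, generic illustration rather than the paper's implementation, and the function names, sample counts, and the choice of additive Gaussian noise as the test-time augmentation are all assumptions.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_uncertainty(model, image, n_samples=20):
    """Epistemic sketch: keep dropout active at test time and take the
    per-voxel variance over stochastic forward passes (MC dropout)."""
    model.eval()
    # Re-enable only the dropout layers, leaving e.g. batch-norm in eval mode.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()
    probs = torch.stack([torch.softmax(model(image), dim=1)
                         for _ in range(n_samples)])
    # Mean over samples is the prediction; variance is the uncertainty map.
    return probs.mean(dim=0), probs.var(dim=0)

@torch.no_grad()
def tta_uncertainty(model, image, n_samples=20, noise_std=0.05):
    """Aleatoric sketch: test-time augmentation, here stand-in additive
    Gaussian noise; variance over augmented passes gives the map."""
    model.eval()
    probs = torch.stack([
        torch.softmax(model(image + noise_std * torch.randn_like(image)), dim=1)
        for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)
```

Note that the parameter settings exposed here (dropout rate inside the model, `noise_std`, `n_samples`) are exactly the kind of knobs the abstract argues create distinct uncertainty categories: changing them changes both the magnitude and the spatial pattern of the resulting variance maps.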
Submitted: Mar 1, 2022