Paper ID: 2405.12222

Influence-based explainability of brain tumor segmentation in multimodal Magnetic Resonance Imaging

Tommaso Torda, Andrea Ciardiello, Simona Gargiulo, Greta Grillo, Simone Scardapane, Cecilia Voena, Stefano Giagu

In recent years, Artificial Intelligence has emerged as a fundamental tool in medical applications. Despite this rapid development, deep neural networks remain black boxes that are difficult to explain, which is a major limitation for their use in clinical practice. We focus on the task of medical image segmentation, where most explainability methods proposed so far provide a visual explanation in the form of an input saliency map. The aim of this work is instead to extend, implement, and test TracIn, an influence-based explainability algorithm originally proposed for classification tasks, on a challenging clinical problem: the multiclass segmentation of brain tumors in multimodal Magnetic Resonance Imaging. We verify the faithfulness of the proposed algorithm by linking the similarity of the network's latent representations to the TracIn output. We further test the capacity of the algorithm to provide local and global explanations, and we suggest that it can be adopted as a tool to select the most relevant features used in the decision process. The method generalizes to any semantic segmentation task with mutually exclusive classes, which is the standard setting in this domain.
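For context, TracIn (Pruthi et al., 2020) estimates the influence of a training example on a test example as a sum, over saved training checkpoints, of learning-rate-weighted dot products between the loss gradients of the two examples. Below is a minimal PyTorch sketch of that score with a per-example segmentation loss plugged in; the function names, the checkpoint-loading scheme, and the loss interface are illustrative assumptions, not the authors' implementation.

    import torch

    def flat_grad(loss, params):
        # Flatten the gradients of a scalar loss w.r.t. `params` into one vector.
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    def tracin_score(model, ckpt_paths, lrs, loss_fn, train_ex, test_ex):
        # TracIn: sum over checkpoints of lr * <grad(train loss), grad(test loss)>.
        # `loss_fn` is assumed to be a per-example segmentation loss
        # (e.g. voxel-wise cross-entropy over mutually exclusive classes).
        score = 0.0
        for ckpt_path, lr in zip(ckpt_paths, lrs):
            model.load_state_dict(torch.load(ckpt_path))
            params = [p for p in model.parameters() if p.requires_grad]

            x_tr, y_tr = train_ex  # image volume and ground-truth mask
            g_tr = flat_grad(loss_fn(model(x_tr), y_tr), params)

            x_te, y_te = test_ex
            g_te = flat_grad(loss_fn(model(x_te), y_te), params)

            score += lr * torch.dot(g_tr, g_te).item()
        return score

In practice one would cache the per-checkpoint test-example gradients and reuse them across all training examples, since recomputing them inside the loop dominates the cost when ranking an entire training set by influence.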

Submitted: Apr 5, 2024