Paper ID: 2309.00903
An explainable three dimension framework to uncover learning patterns: A unified look in variable sulci recognition
Michail Mamalakis, Heloise de Vareilles, Atheer Al-Manea, Samantha C. Mitchell, Ingrid Agartz, Lynn Egeland Mørch-Johnsen, Jane Garrison, Jon Simons, Pietro Lio, John Suckling, Graham Murray
Detecting the features of an artificial intelligence framework's learning process that are significant across the entire training and validation dataset can be framed as providing 'global' explanations. Studies in the literature lack accurate, low-complexity, and three-dimensional (3D) global explanations, which are crucial in neuroimaging, a field with a complex representational space that demands more than basic two-dimensional interpretations. To fill this gap, we developed a novel explainable artificial intelligence (XAI) 3D-Framework that provides robust, faithful, and low-complexity global explanations. We evaluated our framework on various 3D deep learning networks trained, validated, and tested on a well-annotated cohort of 596 subjects from the TOP-OSLO study. The focus was on the presence or absence of the paracingulate sulcus, a variable feature of brain morphology correlated with psychotic conditions. Our proposed 3D-Framework outperforms traditional XAI methods in the faithfulness of its global explanations. As a result, we were able to use these robust explanations to uncover new patterns that not only enhance the credibility and reliability of the training process but also reveal promising new biomarkers and significantly related sub-regions. For the first time, our 3D-Framework offers the scientific community a way to use global explanations to discover novel patterns in this specific neuroscientific application and beyond. This study can help improve the trustworthiness of AI training processes and push the boundaries of our understanding by revealing new patterns in neuroscience and beyond.
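To make the notion of "faithfulness" of an explanation concrete, below is a minimal sketch of a generic deletion-based faithfulness check for a 3D saliency map. This is an illustrative assumption, not the paper's actual evaluation protocol; the names `model`, `volume`, and `saliency` are hypothetical placeholders for a trained 3D classifier, an input brain volume, and a voxel-wise attribution map.

```python
import numpy as np

def deletion_faithfulness(model, volume, saliency, fractions=(0.01, 0.05, 0.1, 0.2)):
    """Generic deletion-based faithfulness check for a 3D saliency map.

    Occludes the most salient voxels (per `saliency`) at increasing fractions
    and records how much the model's predicted probability drops. A faithful
    explanation should produce a steep drop as important voxels are removed.

    Assumes `model` is a callable that maps a batch of volumes (here, batch
    size 1) to a scalar probability, and that `volume` and `saliency` are
    NumPy arrays of the same 3D shape.
    """
    base = float(model(volume[None]))                 # baseline probability
    order = np.argsort(saliency, axis=None)[::-1]     # voxel indices, most salient first
    drops = {}
    for frac in fractions:
        k = int(frac * saliency.size)
        occluded = volume.copy().reshape(-1)
        occluded[order[:k]] = volume.mean()           # replace top-k voxels with mean intensity
        p = float(model(occluded.reshape(volume.shape)[None]))
        drops[frac] = base - p                        # larger drop => more faithful attribution
    return drops
```

Averaging such drop curves over the whole test cohort is one common way to compare the faithfulness of competing XAI methods at the global (dataset-wide) level.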
Submitted: Sep 2, 2023