Paper ID: 2306.11107
A Lightweight Generative Model for Interpretable Subject-level Prediction
Chiara Mauri, Stefano Cerri, Oula Puonti, Mark Mühlau, Koen Van Leemput
Recent years have seen a growing interest in methods for predicting an unknown variable of interest, such as a subject's diagnosis, from medical images depicting its anatomical-functional effects. Methods based on discriminative modeling excel at making accurate predictions, but struggle to explain their decisions in anatomically meaningful terms. In this paper, we propose a simple technique for single-subject prediction that is inherently interpretable. It augments the generative models used in classical human brain mapping techniques, in which the underlying cause-effect relations can be encoded, with a multivariate noise model that captures dominant spatial correlations. Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions, while at the same time offering intuitive visual explanations of its inner workings. The method is easy to use: training is fast for typical training set sizes, and only a single hyperparameter needs to be set by the user. Our code is available at https://github.com/chiara-mauri/Interpretable-subject-level-prediction.
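To make the idea concrete, below is a minimal sketch of one way a generative model of this kind could be set up and inverted for subject-level prediction. It assumes a linear-Gaussian model y = mu + x*w + V z + eps, where w is a cause-effect pattern (as in classical brain mapping) and V z is a low-rank term capturing dominant spatial noise correlations. The specific parameterization, variable names, and closed-form posterior used here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch (assumed form, not the authors' exact model):
#     y = mu + x * w + V @ z + eps,
# with image y (D voxels), scalar target x, effect pattern w,
# latent noise factors z ~ N(0, I_K), and eps ~ N(0, sigma2 * I_D).
rng = np.random.default_rng(0)
D, K = 500, 10                       # voxels, number of noise factors
mu = rng.normal(size=D)              # mean image
w = rng.normal(size=D)               # cause-effect pattern of the target
V = 0.5 * rng.normal(size=(D, K))    # basis of correlated spatial noise
sigma2 = 0.1                         # isotropic noise variance

# Marginal covariance of y given x: Sigma = V V^T + sigma2 * I.
# (For large D, the Woodbury identity keeps this inversion cheap.)
Sigma = V @ V.T + sigma2 * np.eye(D)
Sigma_inv = np.linalg.inv(Sigma)

def predict_x(y, prior_mean=0.0, prior_var=1.0):
    """Invert the model: with a Gaussian prior on x, the posterior
    p(x | y) is Gaussian with closed-form mean and variance."""
    precision = w @ Sigma_inv @ w + 1.0 / prior_var
    mean = (w @ Sigma_inv @ (y - mu) + prior_mean / prior_var) / precision
    return mean, 1.0 / precision

# Simulate a subject with known target and recover it from the image alone.
x_true = 2.0
y = mu + x_true * w + V @ rng.normal(size=K) + np.sqrt(sigma2) * rng.normal(size=D)
x_hat, post_var = predict_x(y)
print(f"true x = {x_true:.2f}, predicted x = {x_hat:.2f} (posterior var {post_var:.4f})")
```

In this sketch, interpretability comes from the fact that w is an explicit spatial map of the target's assumed effect on the image, which can be visualized directly, while the low-rank noise term absorbs correlated variability that a simpler voxel-wise noise model would miss.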
Submitted: Jun 19, 2023