Paper ID: 2312.08915
Attribute Regularized Soft Introspective Variational Autoencoder for Interpretable Cardiac Disease Classification
Maxime Di Folco, Cosmin I. Bercea, Julia A. Schnabel
Interpretability is essential in medical imaging to ensure that clinicians can comprehend and trust artificial intelligence models. In this paper, we propose a novel interpretable approach that applies attribute regularization to the latent space within the framework of an adversarially trained variational autoencoder. Comparative experiments on a cardiac MRI dataset demonstrate that the proposed method addresses the blurry reconstructions typical of variational autoencoders and improves the interpretability of the latent space. Additionally, our analysis of a downstream task shows that cardiac disease classification based on the regularized latent space relies heavily on the attribute-regularized dimensions, demonstrating strong interpretability by linking the attributes used for prediction to clinical observations.
Submitted: Dec 14, 2023
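The abstract does not spell out the regularization term itself; as a point of reference, a minimal sketch of one common form of attribute regularization (in the style of AR-VAE, where each regularized latent dimension is encouraged to vary monotonically with one interpretable attribute) is shown below. The loss form, hyperparameters, and names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an AR-VAE-style attribute-regularization term:
# a chosen latent dimension should be ordered within a batch the same way
# as a clinical attribute (e.g. a cardiac shape measurement). This is an
# illustration under assumed conventions, not the paper's exact loss.
import torch
import torch.nn.functional as F


def attribute_regularization_loss(z_dim: torch.Tensor,
                                  attribute: torch.Tensor,
                                  delta: float = 1.0) -> torch.Tensor:
    """Encourage one latent dimension to follow the ordering of one attribute.

    z_dim:     (batch,) values of a single latent dimension
    attribute: (batch,) values of the attribute it should encode
    delta:     scaling of the tanh surrogate for the sign function
    """
    # Pairwise differences within the batch
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)          # (batch, batch)
    da = attribute.unsqueeze(0) - attribute.unsqueeze(1)  # (batch, batch)

    # Soft latent ordering (tanh) should match the attribute ordering (sign)
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))


if __name__ == "__main__":
    # Hypothetical usage: add one term per (latent dimension, attribute) pair
    # on top of the usual VAE reconstruction/KL and adversarial losses.
    z = torch.randn(8, 16)      # latent codes for a batch of 8 samples
    attrs = torch.rand(8, 3)    # 3 clinical attributes per sample
    gamma = 10.0                # assumed weight of the regularization term
    reg = sum(attribute_regularization_loss(z[:, i], attrs[:, i])
              for i in range(attrs.shape[1]))
    print(float(gamma * reg))
```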