Paper ID: 2111.14260
A Practical guide on Explainable AI Techniques applied on Biomedical use case applications
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, Mauro Dragoni, Thomas Frossard, Benedikt Wagner, Anna Saranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur d'Avila Garcez, Natalia Díaz-Rodríguez
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although these models achieve strong generalization and predictive performance, their inner workings do not allow detailed explanations of their behaviour to be obtained. As opaque machine learning models are increasingly being employed to make important predictions in critical environments, the danger is that decisions are made and used that are not justifiable or legitimate. There is therefore a general agreement on the importance of endowing machine learning models with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency and fairness. This guide is meant to be the go-to handbook for any reader with a computer science background who wants to gain intuitive insights into machine learning models, together with straightforward, fast, out-of-the-box explanations. This article aims to fill the lack of a compelling XAI guide by applying XAI techniques to readers' day-to-day models, datasets and use cases. Figure 1 acts as a flowchart that should help readers find the method best suited to their type of data. Each chapter describes the proposed method, illustrates it on a biomedical application, and provides a Python notebook that can easily be adapted to other specific applications.
Submitted: Nov 13, 2021
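
As a flavour of the kind of notebook the guide describes, the following is a minimal sketch (not taken from the paper's companion notebooks) of a post-hoc XAI technique, SHAP, applied to an opaque model on a tabular biomedical dataset; the choice of dataset, model and plotting call is an illustrative assumption.

```python
# Minimal sketch: explaining a black-box classifier on a biomedical
# dataset with SHAP (post-hoc, model-agnostic-style feature attribution).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Tabular biomedical data: breast cancer diagnosis (illustrative choice).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model whose predictions we want to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Compute SHAP values: per-feature contributions to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# For binary classifiers, older SHAP versions return a list with one
# array per class, newer ones a 3-D array; select the positive class.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global view: which features drive the model's predictions overall.
shap.summary_plot(sv, X_test)
```

Swapping in a different dataset, model or explainer is straightforward, which is the spirit of the modifiable notebooks the article provides.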