Paper ID: 2410.00250

A Methodology for Explainable Large Language Models with Integrated Gradients and Linguistic Analysis in Text Classification

Marina Ribeiro (1 and 2), Bárbara Malcorra (2), Natália B. Mota (2 and 3), Rodrigo Wilkens (4 and 5), Aline Villavicencio (5 and 6), Lilian C. Hubner (7), César Rennó-Costa (1) ((1) Bioinformatics Multidisciplinary Environment (BioME), Digital Metropolis Institute (IMD), Federal University of Rio Grande do Norte (UFRN), Natal (RN), Brazil, (2) Research Department at Mobile Brain, Mobile Brain, Rio de Janeiro (RJ), Brazil, (3) Institute of Psychiatry (IPUB), Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro (RJ), Brazil, (4) Department of Computer Science, The University of Exeter, Exeter, UK, (5) Institute for Data Science and Artificial Intelligence at the University of Exeter, Exeter, UK, (6) Department of Computer Science, The University of Sheffield, Sheffield, UK, (7) School of Humanities, Pontifical Catholic University of Rio Grande do Sul (PUCRS), Porto Alegre (RS), Brazil)

Neurological disorders that affect speech production, such as Alzheimer's Disease (AD), significantly impact the lives of both patients and caregivers, whether through social and psycho-emotional effects or through aspects not yet fully understood. Recent advancements in Large Language Model (LLM) architectures have enabled many tools for identifying representative features of neurological disorders in spontaneous speech. However, LLMs typically lack interpretability: they do not provide clear, specific reasons for their decisions. There is therefore a need for methods that can both identify the representative features of neurological disorders in speech and explain clearly why those features are relevant. This paper presents an explainable LLM method, named SLIME (Statistical and Linguistic Insights for Model Explanation), capable of identifying lexical components representative of AD and indicating which components are most important to the LLM's decision. In developing this method, we used an English-language dataset consisting of transcriptions from the Cookie Theft picture description task. The LLM Bidirectional Encoder Representations from Transformers (BERT) classified each textual description as belonging to either the AD or the control group. To identify representative lexical features and determine which are most relevant to the model's decision, we used a pipeline combining Integrated Gradients (IG), Linguistic Inquiry and Word Count (LIWC), and statistical analysis. Our method shows that BERT leverages lexical components reflecting a reduction in social references in AD, and it identifies which of these components further improve the LLM's accuracy. We thus provide an explainability tool that increases confidence in applying LLMs to neurological clinical contexts, particularly in the study of neurodegeneration.
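As a concrete illustration of the attribution step described in the abstract, the sketch below computes Integrated Gradients token attributions for a BERT sequence classifier using Hugging Face Transformers and Captum. This is a minimal sketch, not the authors' released SLIME code: the bert-base-uncased checkpoint, the choice of class index 1 as the hypothetical "AD" label, the [PAD]-token baseline, and the example sentence are all illustrative assumptions.

```python
# Minimal sketch: Integrated Gradients token attributions for a BERT classifier.
# Assumptions (not from the paper): bert-base-uncased, class index 1 = "AD".
import torch
from transformers import BertTokenizer, BertForSequenceClassification
from captum.attr import LayerIntegratedGradients

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def forward_fn(input_ids, attention_mask):
    # Return the logit of the hypothetical "AD" class (index 1 here).
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 1]

text = "the boy is taking cookies from the jar while the sink overflows"
enc = tokenizer(text, return_tensors="pt")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

# Baseline: the same sequence with every non-special token replaced by [PAD].
baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)
baseline_ids[0, 0] = tokenizer.cls_token_id
baseline_ids[0, -1] = tokenizer.sep_token_id

# Attribute the class logit to the embedding layer, then reduce per token.
lig = LayerIntegratedGradients(forward_fn, model.bert.embeddings)
attributions = lig.attribute(
    inputs=input_ids,
    baselines=baseline_ids,
    additional_forward_args=(attention_mask,),
    n_steps=50,
)
token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per token
tokens = tokenizer.convert_ids_to_tokens(input_ids.squeeze(0))
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>12s}  {score:+.4f}")
```

In the pipeline the abstract outlines, per-token scores of this kind would then be aggregated over lexical categories (e.g., LIWC's social-reference categories) and compared between AD and control groups with standard statistical tests; that aggregation step is not shown here.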

Submitted: Sep 30, 2024