Line by Line Explanation
Line-by-line explanation in AI focuses on providing understandable justifications for model predictions, with the goal of improving trust, transparency, and user understanding. Current research investigates a range of explanation methods, including feature-importance attribution, example-based approaches, and symbolic reasoning, often applied to neural networks, transformers, and large language models. By improving interpretability, this work helps address issues such as misinformation and model bias, and supports reliable, trustworthy AI systems across diverse applications, from recruitment and healthcare to education and business.
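To make the feature-importance idea mentioned above concrete, here is a minimal, illustrative sketch (not drawn from any of the papers listed below) of a permutation/occlusion-style attribution: each feature of a single input is replaced with a baseline value, and the change in the model's prediction is used as that feature's importance score. The `model_predict`, feature names, and baseline values are hypothetical placeholders.

```python
# Illustrative sketch only: occlusion-style feature importance for one prediction.
# The model, features, and baseline below are hypothetical placeholders.
import numpy as np

def feature_importance(model_predict, x, baseline, feature_names):
    """Score each feature by how much the prediction changes when that
    feature is replaced with a baseline (e.g. dataset-mean) value."""
    original = model_predict(x.reshape(1, -1))[0]
    scores = {}
    for i, name in enumerate(feature_names):
        perturbed = x.copy()
        perturbed[i] = baseline[i]          # occlude one feature at a time
        changed = model_predict(perturbed.reshape(1, -1))[0]
        scores[name] = float(abs(original - changed))
    # Largest prediction change first
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

# Toy usage with a hand-written linear "model": prediction = 2*x0 + 0.1*x1
toy_model = lambda X: X @ np.array([2.0, 0.1])
x = np.array([1.0, 3.0])
baseline = np.array([0.0, 0.0])
print(feature_importance(toy_model, x, baseline, ["income", "age"]))
# -> {'income': 2.0, 'age': 0.3}
```

Real attribution methods (e.g. SHAP- or gradient-based techniques) are more sophisticated, but they follow the same underlying idea of relating prediction changes to individual input features.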
Papers
Explaining Deep Learning-based Anomaly Detection in Energy Consumption Data by Focusing on Contextually Relevant Data
Mohammad Noorchenarboo, Katarina Grolinger
Explaining k-Nearest Neighbors: Abductive and Counterfactual Explanations
Pablo Barceló, Alexander Kozachinskiy, Miguel Romero Orth, Bernardo Subercaseaux, José Verschae
COMIX: Compositional Explanations using Prototypes
Sarath Sivaprasad, Dmitry Kangin, Plamen Angelov, Mario Fritz
Watermarking Graph Neural Networks via Explanations for Ownership Protection
Jane Downer, Ren Wang, Binghui Wang
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, Salvatore Ruggieri, Franco Turini, Stephanie Law
FarExStance: Explainable Stance Detection for Farsi
Majid Zarharan, Maryam Hashemi, Malika Behroozrazegh, Sauleh Eetemadi, Mohammad Taher Pilehvar, Jennifer Foster
A Rose by Any Other Name: LLM-Generated Explanations Are Good Proxies for Human Explanations to Collect Label Distributions on NLI
Beiduo Chen, Siyao Peng, Anna Korhonen, Barbara Plank
Unifying Attribution-Based Explanations Using Functional Decomposition
Arne Gevaert, Yvan Saeys