Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods based on feature attribution (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, time series). XAI matters because it can build trust in AI systems, support debugging and model refinement, and enable responsible deployment in high-stakes domains such as healthcare and finance.
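To make the feature-attribution idea concrete, the sketch below computes SHAP values for a simple tree model on tabular data. It is a minimal illustration only: the dataset, model, and printed quantities are assumptions chosen for brevity and are not taken from any of the papers listed below.

```python
# Minimal sketch of SHAP-based feature attribution on tabular data.
# Assumes the `shap` and `scikit-learn` packages are installed; the dataset
# and model are illustrative choices, not drawn from the papers below.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a simple tree ensemble on a standard regression dataset.
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # shape: (5 samples, 10 features)

# Each row decomposes one prediction into per-feature contributions;
# together with the expected value they sum to the model's output.
for i in range(5):
    reconstructed = explainer.expected_value + shap_values[i].sum()
    print(f"sample {i}: prediction ~ {reconstructed:.2f}, "
          f"most influential feature index = {abs(shap_values[i]).argmax()}")
```

Inspecting which features carry the largest attributions for a given prediction is the basic workflow behind many of the attribution-based methods studied in the papers below.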
Papers
Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations
Maximilian Dreyer, Reduan Achtibat, Wojciech Samek, Sebastian Lapuschkin
Elucidating Discrepancy in Explanations of Predictive Models Developed using EMR
Aida Brankovic, Wenjie Huang, David Cook, Sankalp Khanna, Konstanty Bialkowski
Predicting recovery following stroke: deep learning, multimodal data and feature selection using explainable AI
Adam White, Margarita Saranti, Artur d'Avila Garcez, Thomas M. H. Hope, Cathy J. Price, Howard Bowman
CrossEAI: Using Explainable AI to generate better bounding boxes for Chest X-ray images
Jinze Zhao
Parcel loss prediction in last-mile delivery: deep and non-deep approaches with insights from Explainable AI
Jan de Leeuw, Zaharah Bukhsh, Yingqian Zhang
On the stability, correctness and plausibility of visual explanation methods based on feature importance
Romain Xu-Darme, Jenny Benois-Pineau, Romain Giot, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset, Alexey Zhukov