Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
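To make the feature-attribution idea concrete, below is a minimal sketch of Kernel SHAP, the model-agnostic attribution method also referenced in the Roshan and Zafar paper listed under Papers. It assumes the `shap` and `scikit-learn` packages are installed; the dataset, model, and sample sizes are illustrative choices, not details drawn from any of the listed papers.

```python
# Minimal Kernel SHAP sketch (illustrative; dataset and model are
# arbitrary choices, not taken from any paper listed below).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=1000)).fit(X_train, y_train)

def predict_pos(data):
    """Probability of the positive class; Kernel SHAP needs a 1-D output."""
    return model.predict_proba(data)[:, 1]

# Kernel SHAP is model-agnostic: it needs only a prediction function and
# a background sample over which to marginalize "missing" features.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(predict_pos, background)

# Each SHAP value estimates one feature's contribution to pushing the
# predicted probability away from the background average.
shap_values = explainer.shap_values(X_test.iloc[:5], nsamples=200)
print(shap_values.shape)  # (5, n_features)
```

The size of the background sample is a cost/accuracy trade-off: Kernel SHAP's runtime grows with both the number of background rows and `nsamples`, so small samples like the 50 rows above are common for quick, approximate attributions.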
Papers
Formally Explaining Neural Networks within Reactive Systems
Shahaf Bassan, Guy Amir, Davide Corsi, Idan Refaeli, Guy Katz
Using Kernel SHAP XAI Method to optimize the Network Anomaly Detection Model
Khushnaseeb Roshan, Aasim Zafar
Towards a Comprehensive Human-Centred Evaluation Framework for Explainable AI
Ivania Donoso-Guzmán, Jeroen Ooge, Denis Parra, Katrien Verbert
Identifying drivers and mitigators for congestion and redispatch in the German electric power system with explainable AI
Maurizio Titz, Sebastian Pütz, Dirk Witthaut
Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models
Patrik Hammersborg, Inga Strümke