Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods, including feature-attribution techniques (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, time series). XAI matters because it can improve trust in AI systems, facilitate debugging and model improvement, and enable responsible deployment in high-stakes domains such as healthcare and finance.
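To illustrate the feature-attribution methods mentioned above, the minimal sketch below computes SHAP values for a tree-ensemble regressor with the open-source `shap` library. The model, dataset, and printed ranking are illustrative assumptions, not taken from any of the papers listed in this section.

```python
# Illustrative sketch: feature attribution with SHAP values on a
# tree-ensemble regressor. Model and dataset are arbitrary choices.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a small model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Rank features by mean absolute attribution across the explained samples.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Each SHAP value estimates how much a single feature shifts one prediction away from the model's expected output, so averaging their magnitudes as above gives a rough global view of feature importance.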
Papers
Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis
Prithwijit Chowdhury, Mohit Prabhushankar, Ghassan AlRegib, Mohamed Deriche
Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems
Ryan Zhou, Jaume Bacardit, Alexander Brownlee, Stefano Cagnoni, Martin Fyvie, Giovanni Iacca, John McCall, Niki van Stein, David Walker, Ting Hu
Enhancing Explainable AI: A Hybrid Approach Combining GradCAM and LRP for CNN Interpretability
Vaibhav Dhore, Achintya Bhat, Viraj Nerlekar, Kashyap Chavhan, Aniket Umare
EXACT: Towards a platform for empirically benchmarking Machine Learning model explanation methods
Benedict Clark, Rick Wilming, Artur Dox, Paul Eschenbach, Sami Hached, Daniel Jin Wodke, Michias Taye Zewdie, Uladzislau Bruila, Marta Oliveira, Hjalmar Schulz, Luca Matteo Cornils, Danny Panknin, Ahcène Boubekki, Stefan Haufe
Improving the Explain-Any-Concept by Introducing Nonlinearity to the Trainable Surrogate Model
Mounes Zaval, Sedat Ozer