Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
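To make the feature-attribution style of explanation mentioned above concrete, below is a minimal sketch using the shap library with a generic scikit-learn classifier. The dataset, model choice, and version-dependent output handling are illustrative assumptions for demonstration only and are not drawn from any of the papers listed below.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fit a tree-based model on a small tabular dataset (illustrative choice).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, binary classification returns either a list
# of per-class arrays or a single 3-D array; take the positive class.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Per-feature attributions for the first test instance: positive values push
# the prediction toward the positive class, negative values push it away.
top = sorted(zip(X_test.columns, sv[0]), key=lambda t: -abs(t[1]))[:5]
for name, value in top:
    print(f"{name}: {value:+.4f}")
```

Such per-instance attributions are one common way XAI methods surface which inputs drove a particular model decision; counterfactual and LLM-generated explanations, also noted above, address the same goal with different explanation formats.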
Papers
Can I trust my anomaly detection system? A case study based on explainable AI
Muhammad Rashid, Elvio Amparore, Enrico Ferrari, Damiano Verda
Monetizing Currency Pair Sentiments through LLM Explainability
Lior Limonad, Fabiana Fournier, Juan Manuel Vera Díaz, Inna Skarbovsky, Shlomit Gur, Raquel Lazcano
BEExAI: Benchmark to Evaluate Explainable AI
Samuel Sithakoul, Sara Meftah, Clément Feutry
Explaining Decisions in ML Models: a Parameterized Complexity Analysis
Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, Stefan Szeider
The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis
Benjamin Fresz, Vincent Philipp Göbels, Safa Omri, Danilo Brajovic, Andreas Aichele, Janika Kutz, Jens Neuhüttler, Marco F. Huber