Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
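As a concrete illustration of the feature-attribution methods mentioned above, the sketch below computes exact Shapley values for a tiny model by enumerating feature subsets, with absent features replaced by baseline values. This is a minimal from-scratch sketch of the idea behind SHAP, not the SHAP library itself; the model, inputs, and baseline are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley feature attributions for model f at input x.

    Each feature's attribution is its marginal contribution to f,
    averaged over all subsets of the other features; features outside
    the subset are set to their baseline value.
    """
    n = len(x)

    def eval_subset(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                S = set(subset)
                # Shapley kernel weight for a coalition of size |S|
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (eval_subset(S | {i}) - eval_subset(S))
    return phi

# Toy linear model (illustrative): for linear models the exact Shapley
# value of feature i reduces to w_i * (x_i - baseline_i), which the
# subset enumeration above recovers.
weights = [2.0, -1.0, 0.5]
model = lambda z: sum(w * v for w, v in zip(weights, z))
phi = shapley_values(model, x=[1.0, 3.0, 2.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [2.0, -3.0, 1.0]
```

Exact enumeration costs O(2^n) model evaluations, which is why practical SHAP implementations rely on sampling or model-specific shortcuts for larger feature sets.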
Papers
Explaining Machine Learning Models in Natural Conversations: Towards a Conversational XAI Agent
Van Bach Nguyen, Jörg Schlötterer, Christin Seifert
"Mama Always Had a Way of Explaining Things So I Could Understand'': A Dialogue Corpus for Learning to Construct Explanations
Henning Wachsmuth, Milad Alshomary
Explainable AI for tailored electricity consumption feedback -- an experimental evaluation of visualizations
Jacqueline Wastensteiner, Tobias M. Weiss, Felix Haag, Konstantin Hopf
Augmented cross-selling through explainable AI -- a case from energy retailing
Felix Haag, Konstantin Hopf, Pedro Menelau Vasconcelos, Thorsten Staake
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago Gonçalves, João Ribeiro Pinto, Wilson Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
Wei Huang, Xingyu Zhao, Gaojie Jin, Xiaowei Huang
Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning
Tuncay Yiğit, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, Ali Hakan Işık
ferret: a Framework for Benchmarking Explainers on Transformers
Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza