Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods such as feature attribution (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
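Feature attribution makes this concrete: a method like SHAP assigns each input feature an additive contribution to a single prediction, so the attributions plus a baseline reconstruct the model's output. The following minimal sketch shows the typical workflow using the shap library with a scikit-learn tree ensemble; the dataset, model, and hyperparameters are illustrative choices, not drawn from the papers listed below.

# A minimal sketch of feature attribution with SHAP; the dataset and
# model are illustrative assumptions, not taken from the papers below.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an opaque model on a tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles: each value is
# one feature's additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Attributions plus the expected value reconstruct the prediction
# (SHAP's "local accuracy" property).
print(shap_values[0].sum() + explainer.expected_value)

# Global view: which features drive the model's predictions overall.
shap.summary_plot(shap_values, X)

The local-accuracy property, attributions summing exactly to the model output, is what distinguishes SHAP-style attribution from heuristic saliency methods and is one reason it is widely used as a baseline in the work below.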
Papers
Utilizing Explainable AI for improving the Performance of Neural Networks
Huawei Sun, Lorenzo Servadei, Hao Feng, Michael Stephan, Robert Wille, Avik Santra
Explainable AI based Glaucoma Detection using Transfer Learning and LIME
Touhidul Islam Chayan, Anita Islam, Eftykhar Rahman, Md. Tanzim Reza, Tasnim Sakib Apon, MD. Golam Rabiul Alam
Explaining Machine Learning Models in Natural Conversations: Towards a Conversational XAI Agent
Van Bach Nguyen, Jörg Schlötterer, Christin Seifert
"Mama Always Had a Way of Explaining Things So I Could Understand'': A Dialogue Corpus for Learning to Construct Explanations
Henning Wachsmuth, Milad Alshomary
Explainable AI for tailored electricity consumption feedback -- an experimental evaluation of visualizations
Jacqueline Wastensteiner, Tobias M. Weiss, Felix Haag, Konstantin Hopf
Augmented cross-selling through explainable AI -- a case from energy retailing
Felix Haag, Konstantin Hopf, Pedro Menelau Vasconcelos, Thorsten Staake