Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
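To make the feature-attribution idea concrete, below is a minimal sketch using the SHAP library mentioned above. The dataset, model, and parameter choices are illustrative assumptions, not drawn from any of the listed papers; the exact shape of the returned attribution array varies across shap versions.

```python
# Minimal feature-attribution sketch with SHAP.
# Assumptions: the `shap` and `scikit-learn` packages are installed;
# the dataset and model are placeholders chosen for illustration.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model whose predictions we want to explain.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles:
# per-instance, per-feature contributions to the model's output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Each row attributes one prediction to the individual input features;
# large magnitudes flag the features that most influenced that prediction.
print(shap_values)
```

LIME takes a complementary route, fitting a simple local surrogate model around a single prediction; both approaches produce per-feature attributions of the kind discussed in this topic.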
Papers
Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective
Katharina Weitz, Chi Tai Dang, Elisabeth André
Utilizing Explainable AI for improving the Performance of Neural Networks
Huawei Sun, Lorenzo Servadei, Hao Feng, Michael Stephan, Robert Wille, Avik Santra
Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew Guzdial
Explainable Artificial Intelligence to Detect Image Spam Using Convolutional Neural Network
Zhibo Zhang, Ernesto Damiani, Hussam Al Hamadi, Chan Yeob Yeun, Fatma Taher