Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
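The feature-attribution methods mentioned above (e.g., SHAP values) are grounded in the game-theoretic Shapley value: each feature's attribution is its average marginal contribution to the prediction over all coalitions of features. The following is a minimal, brute-force sketch of that idea for a tiny model, using exhaustive coalition enumeration rather than the sampling or tree-based approximations real SHAP implementations use; the function names and the toy model are illustrative assumptions, not taken from any of the listed papers.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x against a baseline.

    v(S) evaluates f with features in coalition S taken from x and the
    remaining features taken from the baseline; phi[i] is feature i's
    marginal contribution averaged over all coalitions with the
    standard Shapley weighting |S|! (n-|S|-1)! / n!.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(rest, k):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Toy model with an interaction term between the two features.
f = lambda z: 2.0 * z[0] + 3.0 * z[1] + z[0] * z[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
# Efficiency property: the attributions sum to f(x) - f(baseline).
print(phi, sum(phi))  # → [2.5, 3.5] 6.0
```

Exhaustive enumeration is exponential in the number of features, which is exactly why practical SHAP tools rely on approximations; but on small inputs it makes the attribution definition, and its efficiency property, directly checkable.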
Papers
Human-AI Interaction in Industrial Robotics: Design and Empirical Evaluation of a User Interface for Explainable AI-Based Robot Program Optimization
Benjamin Alt, Johannes Zahn, Claudius Kienle, Julia Dvorak, Marvin May, Darko Katic, Rainer Jäkel, Tobias Kopp, Michael Beetz, Gisela Lanza
Reliable or Deceptive? Investigating Gated Features for Smooth Visual Explanations in CNNs
Soham Mitra, Atri Sukul, Swalpa Kumar Roy, Pravendra Singh, Vinay Verma
How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Romy Müller
Automatic Extraction of Linguistic Description from Fuzzy Rule Base
Krzysztof Siminski, Konrad Wnuk
X-SHIELD: Regularization for eXplainable Artificial Intelligence
Iván Sevillano-García, Julián Luengo, Francisco Herrera
Using Explainable AI and Hierarchical Planning for Outreach with Robots
Rushang Karia, Jayesh Nagpal, Daksh Dobhal, Pulkit Verma, Rashmeet Kaur Nayyar, Naman Shah, Siddharth Srivastava
A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
Thanh Tam Nguyen, Thanh Trung Huynh, Zhao Ren, Thanh Toan Nguyen, Phi Le Nguyen, Hongzhi Yin, Quoc Viet Hung Nguyen