Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods, including feature-attribution techniques (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications such as healthcare and finance.
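As a concrete illustration of the feature-attribution methods mentioned above, the short sketch below computes SHAP values for a tabular model. It assumes the shap and scikit-learn Python packages are available; the diabetes dataset and random-forest model are illustrative choices, not taken from any of the papers listed below.

    # Minimal sketch of SHAP-based feature attribution (illustrative only).
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Fit a simple tree ensemble on a standard tabular regression dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree models; each value
    # estimates how much one feature shifted one prediction away from the baseline.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Summary plot ranks features by mean absolute attribution across samples.
    shap.summary_plot(shap_values, X.iloc[:100])

TreeExplainer is used here because it provides fast attributions for tree ensembles; model-agnostic alternatives such as KernelExplainer trade speed for generality.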
Papers
How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Romy Müller
Automatic Extraction of Linguistic Description from Fuzzy Rule Base
Krzysztof Siminski, Konrad Wnuk
SHIELD: A regularization technique for eXplainable Artificial Intelligence
Iván Sevillano-García, Julián Luengo, Francisco Herrera
Using Explainable AI and Hierarchical Planning for Outreach with Robots
Daksh Dobhal, Jayesh Nagpal, Rushang Karia, Pulkit Verma, Rashmeet Kaur Nayyar, Naman Shah, Siddharth Srivastava
A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
Thanh Tam Nguyen, Thanh Trung Huynh, Zhao Ren, Thanh Toan Nguyen, Phi Le Nguyen, Hongzhi Yin, Quoc Viet Hung Nguyen
Enhancing UAV Security Through Zero Trust Architecture: An Advanced Deep Learning and Explainable AI Analysis
Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Sahabul Alam, Tariqul Islam
XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development
Zerui Wang, Yan Liu, Abishek Arumugam Thiruselvi, Abdelwahab Hamou-Lhadj
Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI
Vladimir Zaigrajew, Hubert Baniecki, Lukasz Tulczyjew, Agata M. Wijata, Jakub Nalepa, Nicolas Longépé, Przemyslaw Biecek
Fast and Simple Explainability for Point Cloud Networks
Meir Yossef Levi, Guy Gilboa
XpertAI: uncovering model strategies for sub-manifolds
Simon Letzgus, Klaus-Robert Müller, Grégoire Montavon
A Survey of Explainable Knowledge Tracing
Yanhong Bai, Jiabao Zhao, Tingjiang Wei, Qing Cai, Liang He