Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
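To make the feature-attribution idea above concrete, here is a minimal sketch of computing SHAP values for a small tree-ensemble model. It assumes the Python `shap` and `scikit-learn` packages are installed; the diabetes dataset and random-forest model are illustrative choices, not drawn from any of the papers listed below.

```python
# Minimal SHAP feature-attribution sketch (illustrative model and data).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a small tree ensemble on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])  # shape: (50, n_features)

# For each sample, the attributions plus the expected value reconstruct the
# model's prediction, so each entry quantifies one feature's contribution.
print(shap_values.shape)
print(shap_values[0].sum() + explainer.expected_value)  # ≈ model prediction
```

`TreeExplainer` is used here because it is exact and fast for tree ensembles; for arbitrary black-box models, a model-agnostic explainer such as `shap.KernelExplainer` plays the same role at higher computational cost.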
Papers
Enhancing UAV Security Through Zero Trust Architecture: An Advanced Deep Learning and Explainable AI Analysis
Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Sahabul Alam, Tariqul Islam
XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development
Zerui Wang, Yan Liu, Abishek Arumugam Thiruselvi, Abdelwahab Hamou-Lhadj
Red Teaming Models for Hyperspectral Image Analysis Using Explainable AI
Vladimir Zaigrajew, Hubert Baniecki, Lukasz Tulczyjew, Agata M. Wijata, Jakub Nalepa, Nicolas Longépé, Przemyslaw Biecek
Fast and Simple Explainability for Point Cloud Networks
Meir Yossef Levi, Guy Gilboa
XpertAI: uncovering model strategies for sub-manifolds
Simon Letzgus, Klaus-Robert Müller, Grégoire Montavon
A Survey of Explainable Knowledge Tracing
Yanhong Bai, Jiabao Zhao, Tingjiang Wei, Qing Cai, Liang He
An Explainable Transformer-based Model for Phishing Email Detection: A Large Language Model Approach
Mohammad Amaz Uddin, Iqbal H. Sarker
Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing
Adrian Höhl, Ivica Obadic, Miguel Ángel Fernández Torres, Hiba Najjar, Dario Oliveira, Zeynep Akata, Andreas Dengel, Xiao Xiang Zhu