Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including feature-attribution techniques (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
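To make the feature-attribution idea concrete, here is a minimal sketch of computing SHAP values for a tabular model, assuming the `shap` and `scikit-learn` packages are installed; the diabetes dataset and gradient-boosted regressor are illustrative choices, not drawn from any of the papers listed below.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative model and dataset (not from any specific paper below).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Attribution for the first prediction: each value is that feature's
# additive contribution, in target units, relative to the model's
# average output over the background data.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name:>6}: {value:+.2f}")
```

Because SHAP values decompose a single prediction into additive per-feature contributions, they give a local, human-readable account of why the model produced that output, which is what makes them a staple of the XAI evaluations surveyed below.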
Papers
From Images to Insights: Transforming Brain Cancer Diagnosis with Explainable AI
Md. Arafat Alam Khandaker, Ziyan Shirin Raha, Salehin Bin Iqbal, M.F. Mridha, Jungpil Shin
Integrating Explainable AI for Effective Malware Detection in Encrypted Network Traffic
Sileshi Nibret Zeleke, Amsalu Fentie Jember, Mario Bochicchio
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, Salvatore Ruggieri, Franco Turini, Stephanie Law
Explainable AI based System for Supply Air Temperature Forecast
Marika Eik, Ahmet Kose, Hossein Nourollahi Hokmabad, Juri Belikov
Leveraging Explainable AI for LLM Text Attribution: Differentiating Human-Written and Multiple LLMs-Generated Text
Ayat Najjar, Huthaifa I. Ashqar, Omar Darwish, Eman Hammad
Detecting AI-Generated Text in Educational Content: Leveraging Machine Learning and Explainable AI for Academic Integrity
Ayat A. Najjar, Huthaifa I. Ashqar, Omar A. Darwish, Eman Hammad
Improving Robustness Estimates in Natural Language Explainable AI through Synonymity Weighted Similarity Measures
Christopher Burger
ProjectedEx: Enhancing Generation in Explainable AI for Prostate Cancer
Xuyin Qi, Zeyu Zhang, Aaron Berliano Handoko, Huazhan Zheng, Mingxi Chen, Ta Duc Huy, Vu Minh Hieu Phan, Lei Zhang, Linqi Cheng, Shiyu Jiang, Zhiwei Zhang, Zhibin Liao, Yang Zhao, Minh-Son To
Explainable AI for Multivariate Time Series Pattern Exploration: Latent Space Visual Analytics with Time Fusion Transformer and Variational Autoencoders in Power Grid Event Diagnosis
Haowen Xu, Ali Boyaci, Jianming Lian, Aaron Wilson
Choose Your Explanation: A Comparison of SHAP and GradCAM in Human Activity Recognition
Felix Tempel, Daniel Groos, Espen Alexander F. Ihlen, Lars Adde, Inga Strümke
Critique of Impure Reason: Unveiling the reasoning behaviour of medical Large Language Models
Shamus Sim, Tyrone Chen
Extracting PAC Decision Trees from Black Box Binary Classifiers: The Gender Bias Study Case on BERT-based Language Models
Ana Ozaki, Roberto Confalonieri, Ricardo Guimarães, Anders Imenes
Adopting Explainable-AI to investigate the impact of urban morphology design on energy and environmental performance in dry-arid climates
Pegah Eshraghi, Riccardo Talami, Arman Nikkhah Dehnavi, Maedeh Mirdamadi, Zahra-Sadat Zomorodian