Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods, including feature-attribution techniques (e.g., SHAP values), counterfactual explanations, and the use of large language models to generate human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
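To make the feature-attribution idea concrete: SHAP values decompose a single prediction into additive per-feature contributions, so the base value plus all attributions recovers the model output. The sketch below is a minimal, illustrative example using the open-source shap library with a scikit-learn tree ensemble; the dataset and model are stand-ins chosen for brevity, not drawn from any of the papers listed.

```python
# Minimal sketch of SHAP-based feature attribution, assuming the `shap`
# and scikit-learn packages are installed; the regression model and
# dataset are illustrative only.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a tree ensemble on a standard tabular regression dataset.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # shape: (1, n_features)

# Additivity: base value plus per-feature attributions recovers the
# model's prediction for the explained sample.
base = float(np.ravel(explainer.expected_value)[0])
pred = model.predict(X[:1])[0]
print(f"prediction={pred:.2f}, base+attributions={base + shap_values[0].sum():.2f}")

# Rank features by absolute contribution to this one prediction.
for j in np.argsort(np.abs(shap_values[0]))[::-1][:3]:
    print(f"{data.feature_names[j]}: {shap_values[0][j]:+.3f}")
```

The same additive decomposition underlies applications such as the DNA-profile classification work below, where Shapley values are extended to large multidimensional time-series inputs.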
Papers
Embodied Exploration of Latent Spaces and Explainable AI
Elizabeth Wilson, Mika Satomi, Alex McLean, Deva Schubert, Juan Felipe Amaya Gonzalez
Explainable AI in Handwriting Detection for Dyslexia Using Transfer Learning
Mahmoud Robaa, Mazen Balat, Rewaa Awaad, Esraa Omar, Salah A. Aly
Formal Explanations for Neuro-Symbolic AI
Sushmita Paul, Jinqiang Yu, Jip J. Dekker, Alexey Ignatiev, Peter J. Stuckey
XForecast: Evaluating Natural Language Explanations for Time Series Forecasting
Taha Aksu, Chenghao Liu, Amrita Saha, Sarah Tan, Caiming Xiong, Doyen Sahoo
TABCF: Counterfactual Explanations for Tabular Data Using a Transformer-Based VAE
Emmanouil Panagiotou, Manuel Heurich, Tim Landgraf, Eirini Ntoutsi
LG-CAV: Train Any Concept Activation Vector with Language Guidance
Qihan Huang, Jie Song, Mengqi Xue, Haofei Zhang, Bingde Hu, Huiqiong Wang, Hao Jiang, Xingen Wang, Mingli Song
Precision Cancer Classification and Biomarker Identification from mRNA Gene Expression via Dimensionality Reduction and Explainable AI
Farzana Tabassum, Sabrina Islam, Siana Rizwan, Masrur Sobhan, Tasnim Ahmed, Sabbir Ahmed, Tareque Mohmud Chowdhury
Demonstration Based Explainable AI for Learning from Demonstration Methods
Morris Gu, Elizabeth Croft, Dana Kulic
F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo
Ethio-Fake: Cutting-Edge Approaches to Combat Fake News in Under-Resourced Languages Using Explainable AI
Mesay Gemeda Yigezu, Melkamu Abay Mersha, Girma Yohannis Bade, Jugal Kalita, Olga Kolesnikova, Alexander Gelbukh
Explaining Explaining
Sergei Nirenburg, Marjorie McShane, Kenneth W. Goodman, Sanjay Oruganti
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
Supriya Manna, Niladri Sett
A novel application of Shapley values for large multidimensional time-series data: Applying explainable AI to a DNA profile classification neural network
Lauren Elborough, Duncan Taylor, Melissa Humphries
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, Udo Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger