Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating various XAI methods, including those based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across diverse data types (images, text, time series). The significance of XAI lies in its potential to improve trust in AI systems, facilitate debugging and model improvement, and enable responsible AI deployment in high-stakes applications like healthcare and finance.
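To make the feature-attribution idea concrete, here is a minimal sketch of computing SHAP values for a tabular model. It assumes the `shap` and `scikit-learn` packages are installed; the dataset and model are illustrative stand-ins, not taken from any of the papers listed below.

    # Minimal SHAP feature-attribution sketch (illustrative; not from any paper below).
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Fit a simple model on a standard tabular dataset.
    data = load_diabetes()
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    # TreeExplainer computes SHAP values for tree ensembles: each value is one
    # feature's additive contribution to a single prediction, relative to the
    # model's expected output.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data[:1])  # attributions for one sample

    # Rank features by attribution magnitude for that sample.
    for name, value in sorted(zip(data.feature_names, shap_values[0]),
                              key=lambda p: abs(p[1]), reverse=True):
        print(f"{name}: {value:+.3f}")

Each printed value is that feature's signed contribution to the model's output for the chosen sample, which is the kind of per-prediction explanation several of the papers below evaluate or extend.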
Papers
An Explainable Transformer-based Model for Phishing Email Detection: A Large Language Model Approach
Mohammad Amaz Uddin, Iqbal H. Sarker
Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing
Adrian Höhl, Ivica Obadic, Miguel Ángel Fernández Torres, Hiba Najjar, Dario Oliveira, Zeynep Akata, Andreas Dengel, Xiao Xiang Zhu
Automated detection of motion artifacts in brain MR images using deep learning and explainable artificial intelligence
Marina Manso Jimeno, Keerthi Sravan Ravi, Maggie Fung, John Thomas Vaughan, Sairam Geethanath
A Logical Approach to Criminal Case Investigation
Takanori Ugai, Yusuke Koyanagi, Fumihito Nishino
Abstracted Trajectory Visualization for Explainability in Reinforcement Learning
Yoshiki Takagi, Roderick Tabalba, Nurit Kirshenbaum, Jason Leigh
SIDU-TXT: An XAI Algorithm for NLP with a Holistic Assessment Approach
Mohammad N. S. Jahromi, Satya M. Muddamsetty, Asta Sofie Stage Jarlner, Anna Murphy Høgenhaug, Thomas Gammeltoft-Hansen, Thomas B. Moeslund
Explaining Predictive Uncertainty by Exposing Second-Order Effects
Florian Bley, Sebastian Lapuschkin, Wojciech Samek, Grégoire Montavon
Explainable AI for survival analysis: a median-SHAP approach
Lucile Ter-Minassian, Sahra Ghalebikesabi, Karla Diaz-Ordaz, Chris Holmes
NormEnsembleXAI: Unveiling the Strengths and Weaknesses of XAI Ensemble Techniques
Weronika Hryniewska-Guzik, Bartosz Sawicki, Przemysław Biecek
XAI for All: Can Large Language Models Simplify Explainable AI?
Philip Mavrepis, Georgios Makridis, Georgios Fatouros, Vasileios Koukos, Maria Margarita Separdani, Dimosthenis Kyriazis
LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations
Qianli Wang, Tatiana Anikina, Nils Feldhus, Josef van Genabith, Leonhard Hennig, Sebastian Möller