Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) aims to make the decision-making processes of complex AI models more transparent and understandable, addressing concerns about trust and accountability, particularly in high-stakes applications like healthcare and finance. Current research focuses on developing and evaluating various explanation methods, including those based on feature attribution (e.g., SHAP, LIME), prototype generation, and counterfactual examples, often applied to deep neural networks and other machine learning models. The ultimate goal is to improve the reliability and usability of AI systems by providing insights into their predictions and enhancing human-AI collaboration.
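To make the feature-attribution methods mentioned above concrete, the sketch below uses the SHAP library to rank the input features of a gradient-boosted classifier by their average contribution to its predictions. It is a minimal, illustrative example assuming the shap, numpy, and scikit-learn packages are available; the dataset, model, and top-5 reporting are placeholder choices, not drawn from any of the listed papers.

```python
# Minimal sketch of feature attribution with SHAP (illustrative only;
# assumes shap, numpy, and scikit-learn are installed).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer attributes each prediction to individual input features:
# a feature's SHAP value is its contribution to moving the model's output
# (here, the log-odds) away from the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Rank features by mean absolute attribution across the test set.
mean_abs = np.abs(shap_values).mean(axis=0)
ranking = sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.4f}")
```

The same per-feature attributions can also be inspected for a single prediction, which is how such explanations are typically presented to end users in high-stakes settings.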
Papers
Breast Cancer Diagnosis: A Comprehensive Exploration of Explainable Artificial Intelligence (XAI) Techniques
Samita Bai, Sidra Nasir, Rizwan Ahmed Khan, Sheeraz Arif, Alexandre Meyer, Hubert Konik
Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition
Alaa Nfissi, Wassim Bouachir, Nizar Bouguila, Brian Mishara
Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI
Anjana Wijekoon, David Corsar, Nirmalie Wiratunga, Kyle Martin, Pedram Salimi
Solving the enigma: Deriving optimal explanations of deep networks
Michail Mamalakis, Antonios Mamalakis, Ingrid Agartz, Lynn Egeland Mørch-Johnsen, Graham Murray, John Suckling, Pietro Lio
T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, Luis Gustavo Nonato
Fiper: a Visual-based Explanation Combining Rules and Feature Importance
Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, Salvatore Rinzivillo
Transparent AI: Developing an Explainable Interface for Predicting Postoperative Complications
Yuanfang Ren, Chirayu Tripathi, Ziyuan Guan, Ruilin Zhu, Victoria Hougha, Yingbo Ma, Zhenhong Hu, Jeremy Balch, Tyler J. Loftus, Parisa Rashidi, Benjamin Shickel, Tezcan Ozrazgat-Baslanti, Azra Bihorac
Concept Induction using LLMs: a user experiment for assessment
Adrita Barua, Cara Widmer, Pascal Hitzler