XAI Methods
Explainable AI (XAI) methods aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on developing robust evaluation frameworks for existing XAI techniques, including those based on feature attribution, surrogate models, and concept-based explanations, and on addressing challenges such as the generation of out-of-distribution samples and the impact of multicollinearity. This work is crucial for building trust in AI systems across domains, particularly in high-stakes applications such as healthcare and finance, where interpretability and accountability are paramount. The development of standardized evaluation metrics and the exploration of user-centric approaches are key areas of ongoing investigation.
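As a concrete illustration of two of the technique families mentioned above, the sketch below computes feature-attribution scores via permutation importance and fits a global surrogate model to a black-box classifier. It is a minimal example, not drawn from the papers listed here; the dataset, models, and scoring choices are assumptions made purely for demonstration.

```python
# Minimal sketch (illustrative only): feature attribution via permutation
# importance plus a global surrogate model for a black-box classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with redundant (correlated) features, so the effect of
# multicollinearity on attribution scores can be observed.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           n_redundant=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Feature attribution: shuffle each feature on held-out data and measure the
# drop in accuracy. Note that permuting a feature breaks its correlations,
# so the perturbed rows can be out-of-distribution -- one of the evaluation
# pitfalls noted in the summary above.
attr = permutation_importance(black_box, X_test, y_test,
                              n_repeats=20, random_state=0)
for i in np.argsort(attr.importances_mean)[::-1]:
    print(f"feature {i}: {attr.importances_mean[i]:.3f}")

# Global surrogate: fit an interpretable shallow tree to the black box's
# predictions, then measure how faithfully it mimics them (fidelity).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity on test set: {fidelity:.3f}")
```

Fidelity of the surrogate to the black box is one simple evaluation criterion; the benchmarking and user-study papers below examine richer, human-aligned and user-centric measures.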
Papers
GraphXAIN: Narratives to Explain Graph Neural Networks
Mateusz Cedro, David Martens
Benchmarking XAI Explanations with Human-Aligned Evaluations
Rémi Kazmierczak, Steve Azzolin, Eloïse Berthier, Anna Hedström, Patricia Delhomme, Nicolas Bousquet, Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
XAI-FUNGI: Dataset resulting from the user study on comprehensibility of explainable AI algorithms
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features
Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller