High Explainability
High explainability in artificial intelligence (AI) aims to make the decision-making processes of complex models, such as large language models and deep neural networks, more transparent and understandable. Current research develops both intrinsic (built into the model) and post-hoc (applied after training) explainability methods, often employing techniques such as attention mechanisms, feature attribution, and counterfactual examples to interpret model outputs across modalities (text, images, audio). This pursuit is crucial for building trust in AI systems, particularly in high-stakes domains such as medicine and finance, and for ensuring fairness, accountability, and responsible AI development.
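To make the post-hoc feature-attribution idea concrete, the snippet below is a minimal sketch of gradient x input saliency for an image classifier. It is a generic illustration, not the method of any paper listed here; the PyTorch/torchvision dependency, the untrained ResNet-18, and the random input tensor are all assumptions chosen to keep the example self-contained.

```python
# Minimal sketch of post-hoc feature attribution via gradient x input.
# Assumes PyTorch and torchvision are installed. weights=None gives a
# randomly initialized model so the sketch runs offline; a real analysis
# would use a trained model and a normalized image tensor.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)
model.eval()

# Stand-in input image with gradients enabled on the pixels.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(image)
target_class = logits.argmax(dim=1).item()  # explain the predicted class

# Backpropagate the target logit to obtain input-space gradients.
logits[0, target_class].backward()

# Gradient x input: per-pixel attribution, aggregated over color channels.
attribution = (image.grad * image).detach().sum(dim=1).abs().squeeze(0)
print(attribution.shape)  # torch.Size([224, 224]) saliency map
```

Libraries such as Captum package more robust variants of this idea (e.g., Integrated Gradients), which average gradients along a path from a baseline input rather than using a single gradient evaluation.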
Papers
GUIDEQ: Framework for Guided Questioning for progressive informational collection and classification
Priya Mishra, Suraj Racha, Kaustubh Ponkshe, Adit Akarsh, Ganesh Ramakrishnan
Visual-TCAV: Concept-based Attribution and Saliency Maps for Post-hoc Explainability in Image Classification
Antonio De Santis, Riccardo Campi, Matteo Bianchi, Marco Brambilla
Generative Example-Based Explanations: Bridging the Gap between Generative Modeling and Explainability
Philipp Vaeth, Alexander M. Fruehwald, Benjamin Paassen, Magda Gregorova
Explainability in AI Based Applications: A Framework for Comparing Different Techniques
Arne Grobrugge, Nidhi Mishra, Johannes Jakubik, Gerhard Satzger
Neuropsychology and Explainability of AI: A Distributional Approach to the Relationship Between Activation Similarity of Neural Categories in Synthetic Cognition
Michael Pichat, Enola Campoli, William Pogrund, Jourdan Wilson, Michael Veillet-Guillem, Anton Melkozerov, Paloma Pichat, Armanush Gasparian, Samuel Demarchi, Judicael Poumay
An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems
Shruthi Chari
User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
XAI-FUNGI: Dataset resulting from the user study on comprehensibility of explainable AI algorithms
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
Explainability of Highly Associated Fuzzy Churn Patterns in Binary Classification
D.Y.C. Wang, Lars Arne Jordanger, Jerry Chun-Wei Lin