Line-by-Line Explanation
Line-by-line explanation in AI focuses on providing understandable justifications for model predictions, with the aim of improving trust, transparency, and user understanding. Current research investigates a range of explanation methods, including feature-importance attribution, example-based approaches, and symbolic reasoning, applied to models such as neural networks, transformers, and large language models. By improving interpretability and addressing issues such as misinformation and model bias, this work is crucial for building reliable and trustworthy AI systems across diverse applications, from recruitment and healthcare to education and business.
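As a concrete illustration of the feature-importance family of methods mentioned above, the sketch below computes permutation importance with scikit-learn: each feature is shuffled in turn and the resulting drop in held-out accuracy is used as its importance score. The random-forest model, the toy breast-cancer dataset, and the top-5 reporting are illustrative assumptions for this sketch and are not drawn from any of the papers listed below.

    # Minimal permutation-importance sketch (illustrative model and dataset,
    # not taken from the listed papers).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure how much held-out accuracy
    # drops; larger drops indicate heavier reliance on that feature.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=10, random_state=0
    )

    # Report the five most important features.
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")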
Papers
Creating Healthy Friction: Determining Stakeholder Requirements of Job Recommendation Explanations
Roan Schellingerhout, Francesco Barile, Nava Tintarev
Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making
Min Hun Lee, Renee Bao Xuan Ng, Silvana Xinyi Choo, Shamala Thilarajah
Exploring the Effect of Explanation Content and Format on User Comprehension and Trust
Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni
Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features
Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller