Line-by-Line Explanation
Line-by-line explanation in AI focuses on providing understandable justifications for model predictions, with the aim of improving trust, transparency, and user understanding. Current research investigates a range of explanation methods, including feature-importance, example-based, and symbolic-reasoning approaches, typically applied to neural networks, transformers, and large language models. By improving interpretability and addressing issues such as misinformation and model bias, this work supports the development of reliable and trustworthy AI systems across diverse applications, from recruitment and healthcare to education and business.
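To make the feature-importance family mentioned above concrete, the following is a minimal sketch of a perturbation-based importance score: each feature is replaced with a baseline value and the resulting change in the model's output is recorded. The toy model, feature values, and baseline here are hypothetical illustrations, not the method of any paper listed below; real approaches (e.g., occlusion- or SHAP-style attribution) are considerably more sophisticated.

import numpy as np

def toy_model(x: np.ndarray) -> float:
    # Hypothetical scoring model: a fixed linear combination of four features.
    weights = np.array([0.8, -0.3, 0.5, 0.0])
    return float(x @ weights)

def perturbation_importance(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    # Score each feature by how much the prediction changes when that
    # feature is replaced with a baseline value (here: zero).
    original = toy_model(x)
    importances = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = baseline
        importances[i] = abs(original - toy_model(perturbed))
    return importances

if __name__ == "__main__":
    sample = np.array([1.0, 2.0, -1.0, 3.0])
    # Larger values indicate features whose removal changes the prediction most.
    print(perturbation_importance(sample))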
Papers
SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation
Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, Luisa Bentivogli
Explaining and Improving Contrastive Decoding by Extrapolating the Probabilities of a Huge and Hypothetical LM
Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, Tagyoung Chung
SSET: Swapping-Sliding Explanation for Time Series Classifiers in Affect Detection
Nazanin Fouladgar, Marjan Alirezaie, Kary Främling
Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting
Maxime Kayser, Bayar Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, Bartlomiej W. Papiez, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu