High Explainability
High explainability in artificial intelligence (AI) aims to make the decision-making processes of complex models, such as large language models and deep neural networks, more transparent and understandable. Current research focuses on developing both intrinsic (built-in) and post-hoc (added after training) explainability methods, often employing techniques like attention mechanisms, feature attribution, and counterfactual examples to interpret model outputs across various modalities (text, images, audio). This pursuit is crucial for building trust in AI systems, particularly in high-stakes domains like medicine and finance, and for ensuring fairness, accountability, and responsible AI development.
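To make the feature-attribution idea mentioned above concrete, here is a minimal sketch of one common post-hoc method, gradient-times-input, applied to a hypothetical PyTorch classifier. The tiny model and random input are placeholders for illustration only and are not drawn from any of the papers listed below.

```python
# Minimal sketch of post-hoc feature attribution via gradient-times-input.
# The classifier and input below are hypothetical placeholders; any
# differentiable model could stand in their place.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: 4 input features, 2 classes.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one example, 4 features
logits = model(x)
target = logits.argmax(dim=1).item()       # explain the predicted class

# Backpropagate the target logit to obtain per-feature gradients.
logits[0, target].backward()

# Gradient x input scores each feature's contribution to that logit:
# positive values push the prediction toward the class, negative away.
attribution = (x.grad * x).detach().squeeze()
print(attribution)
```

More robust variants of this idea, such as integrated gradients, are implemented in attribution libraries like Captum.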
Papers
Explainability for Machine Learning Models: From Data Adaptability to User Perception
Julien Delaunay
RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model
Jianhao Yuan, Shuyang Sun, Daniel Omeiza, Bo Zhao, Paul Newman, Lars Kunze, Matthew Gadd
On Explaining Unfairness: An Overview
Christos Fragkathoulas, Vasiliki Papanikou, Danae Pla Karidi, Evaggelia Pitoura
Towards Uncovering How Large Language Model Works: An Explainability Perspective
Haiyan Zhao, Fan Yang, Bo Shen, Himabindu Lakkaraju, Mengnan Du
Abstracted Trajectory Visualization for Explainability in Reinforcement Learning
Yoshiki Takagi, Roderick Tabalba, Nurit Kirshenbaum, Jason Leigh
SIDU-TXT: An XAI Algorithm for NLP with a Holistic Assessment Approach
Mohammad N. S. Jahromi, Satya M. Muddamsetty, Asta Sofie Stage Jarlner, Anna Murphy Høgenhaug, Thomas Gammeltoft-Hansen, Thomas B. Moeslund
EXGC: Bridging Efficiency and Explainability in Graph Condensation
Junfeng Fang, Xinglin Li, Yongduo Sui, Yuan Gao, Guibin Zhang, Kun Wang, Xiang Wang, Xiangnan He
How Good is ChatGPT at Face Biometrics? A First Look into Recognition, Soft Biometrics, and Explainability
Ivan DeAndres-Tame, Ruben Tolosana, Ruben Vera-Rodriguez, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia
Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
Timothée Schmude, Laura Koesten, Torsten Möller, Sebastian Tschiatschek