High Explainability
Research on high explainability in artificial intelligence (AI) aims to make the decision-making processes of complex models, such as large language models and deep neural networks, more transparent and understandable. Current work develops both intrinsic (built-in) and post-hoc (added after training) explainability methods, often employing techniques such as attention mechanisms, feature attribution, and counterfactual examples to interpret model outputs across modalities (text, images, audio). This pursuit is crucial for building trust in AI systems, particularly in high-stakes domains such as medicine and finance, and for ensuring fairness, accountability, and responsible AI development.
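As a concrete illustration of the post-hoc, feature-attribution side of this work, the sketch below computes a simple gradient-times-input attribution for a small classifier. The model, the four tabular features, and their values are hypothetical assumptions chosen for illustration; none of the listed papers prescribes this exact recipe.

```python
# Minimal sketch of post-hoc feature attribution via gradient x input.
# The "black box" model and its four features are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical black-box classifier over 4 tabular features, 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# One input instance to explain; gradients are tracked w.r.t. its features.
x = torch.tensor([[0.5, -1.2, 3.0, 0.1]], requires_grad=True)

logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Differentiate the predicted class's score with respect to the input.
logits[0, predicted_class].backward()

# Gradient-times-input: a per-feature estimate of how much each input
# value pushed the model toward its prediction.
attribution = (x.grad * x).detach().squeeze()
for i, value in enumerate(attribution.tolist()):
    print(f"feature {i}: attribution {value:+.4f}")
```

More involved attribution methods (integrated gradients, SHAP-style sampling, counterfactual search) follow the same pattern: perturb or differentiate the input and report a per-feature contribution to the model's output.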
Papers
Explainable History Distillation by Marked Temporal Point Process
Sishun Liu, Ke Deng, Yan Wang, Xiuzhen Zhang
A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods
Branka Hadji Misheva, Joerg Osterrieder
Explainable Boosting Machines with Sparsity -- Maintaining Explainability in High-Dimensional Settings
Brandon M. Greenwell, Annika Dahlmann, Saurabh Dhoble
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations
Koorosh Aslansefat, Mojgan Hashemian, Martin Walker, Mohammed Naveed Akram, Ioannis Sorokos, Yiannis Papadopoulos
On the Interplay between Fairness and Explainability
Stephanie Brandl, Emanuele Bugliarello, Ilias Chalkidis
Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models
Oren Barkan, Yuval Asher, Amit Eshel, Yehonatan Elisha, Noam Koenigstein
Towards Explainability in Monocular Depth Estimation
Vasileios Arampatzakis, George Pavlidis, Kyriakos Pantoglou, Nikolaos Mitianoudis, Nikos Papamarkos