XAI Method
Explainable AI (XAI) methods aim to make the decision-making processes of complex machine learning models more transparent and understandable. Current research focuses on developing robust evaluation frameworks for existing XAI techniques, including those based on feature attribution, surrogate models, and concept-based explanations, and on addressing challenges such as the generation of out-of-distribution samples and the impact of multicollinearity. This work is crucial for building trust in AI systems across domains, particularly in high-stakes applications such as healthcare and finance, where interpretability and accountability are paramount. The development of standardized evaluation metrics and the exploration of user-centric approaches are key areas of ongoing investigation.
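To make the feature-attribution idea and the out-of-distribution concern concrete, below is a minimal sketch of permutation-based feature importance. It is an illustrative example only, not a method from any of the surveyed papers; the dataset and model are stand-ins, and it assumes a scikit-learn-style estimator with `fit` and `score` methods.

```python
# Minimal sketch: permutation-based feature attribution.
# Assumption: an illustrative synthetic dataset and RandomForestClassifier stand in
# for whatever model is being explained.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    # Shuffling one column breaks its joint distribution with the other features,
    # which is the source of the out-of-distribution samples noted above,
    # especially when features are strongly correlated (multicollinearity).
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y_test))

for j, drop in enumerate(importances):
    print(f"feature {j}: accuracy drop when permuted = {drop:.3f}")
```

The accuracy drop per feature serves as a simple attribution score; evaluating how trustworthy such scores are under perturbation and correlation is exactly the kind of question the evaluation frameworks above target.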
Papers
A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting
Pierre-Daniel Arsenault, Shengrui Wang, Jean-Marc Patenaude
The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis
Benjamin Fresz, Vincent Philipp Göbels, Safa Omri, Danilo Brajovic, Andreas Aichele, Janika Kutz, Jens Neuhüttler, Marco F. Huber
Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis
Prithwijit Chowdhury, Mohit Prabhushankar, Ghassan AlRegib, Mohamed Deriche
Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems
Ryan Zhou, Jaume Bacardit, Alexander Brownlee, Stefano Cagnoni, Martin Fyvie, Giovanni Iacca, John McCall, Niki van Stein, David Walker, Ting Hu