XAI Community
The xAI community focuses on developing and applying methods to make the decision-making processes of artificial intelligence models more transparent and understandable. Current research emphasizes improving the interpretability of various model architectures, including deep neural networks, through techniques like SHAP, LIME, and Grad-CAM, and exploring the use of large language models to translate technical explanations into user-friendly formats. This work is crucial for building trust in AI systems across diverse fields, from healthcare diagnostics and financial forecasting to engineering applications, and for ensuring responsible AI development and deployment.
Papers
A Survey of Explainable Artificial Intelligence (XAI) in Financial Time Series Forecasting
Pierre-Daniel Arsenault, Shengrui Wang, Jean-Marc Patenaude
The Contribution of XAI for the Safe Development and Certification of AI: An Expert-Based Analysis
Benjamin Fresz, Vincent Philipp Göbels, Safa Omri, Danilo Brajovic, Andreas Aichele, Janika Kutz, Jens Neuhüttler, Marco F. Huber