XAI Community
The XAI (explainable AI) community focuses on developing and applying methods that make the decision-making processes of artificial intelligence models more transparent and understandable. Current research emphasizes improving the interpretability of a range of model architectures, including deep neural networks, through post-hoc attribution techniques such as SHAP, LIME, and Grad-CAM, and explores using large language models to translate technical explanations into user-friendly formats. This work is crucial for building trust in AI systems across diverse fields, from healthcare diagnostics and financial forecasting to engineering applications, and for ensuring responsible AI development and deployment.
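Since the summary names SHAP as a representative post-hoc attribution technique, a minimal sketch of that workflow is shown below using the shap library with a small scikit-learn model. The diabetes dataset and random-forest regressor are illustrative assumptions, not drawn from any of the listed papers.

```python
# Minimal post-hoc explanation sketch with SHAP (illustrative setup,
# not taken from the papers listed on this page).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a simple tree-based model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value feature attributions
# efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Visualize which features drive the model's predictions overall.
shap.summary_plot(shap_values, X.iloc[:100])
```

Each row of shap_values attributes a sample's prediction across the input features, which is the kind of transparency the techniques above aim to provide.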
Papers
Exploring XAI for the Arts: Explaining Latent Space in Generative Music
Nick Bryan-Kinns, Berker Banar, Corey Ford, Courtney N. Reed, Yixiao Zhang, Simon Colton, Jack Armitage
Explainable AI applications in the Medical Domain: a systematic review
Nicoletta Prentzas, Antonis Kakas, Constantinos S. Pattichis