Interpretable AI
Interpretable AI (IAI) aims to create artificial intelligence systems whose decision-making processes are transparent and understandable, addressing concerns about "black box" models. Current research focuses on developing methods to quantify and improve the consistency of explanations, applying these techniques to various model architectures including deep neural networks, and adapting game-theoretic approaches such as Shapley values for improved interpretability. This work is crucial for building trust in AI systems across fields such as healthcare, finance, and law, where understanding the reasoning behind AI decisions is essential for responsible deployment and effective human-AI collaboration.
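To make the game-theoretic idea concrete, the sketch below computes exact Shapley values for a tiny, hypothetical three-feature model by brute force over all feature coalitions, with "absent" features fixed to baseline values. It is only an illustration of the attribution principle, not the method of either listed paper; the feature names, baseline, and toy model are assumptions chosen for readability.

```python
# Illustrative sketch (hypothetical model and features): exact Shapley values
# computed by enumerating all coalitions, feasible only for a handful of features.
from itertools import combinations
from math import factorial

FEATURES = ["age", "income", "score"]                    # hypothetical feature names
x = {"age": 50, "income": 60_000, "score": 0.7}          # instance to explain
baseline = {"age": 40, "income": 40_000, "score": 0.5}   # reference ("absent") values

def model(inputs):
    # Hypothetical predictor standing in for any black-box model.
    return 0.01 * inputs["age"] + 0.00001 * inputs["income"] + 2.0 * inputs["score"]

def value(coalition):
    # Model output when only features in `coalition` take their true values;
    # remaining features are fixed to baseline values.
    merged = {f: (x[f] if f in coalition else baseline[f]) for f in FEATURES}
    return model(merged)

def shapley_values():
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n! times the marginal contribution.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

if __name__ == "__main__":
    phi = shapley_values()
    print(phi)
    # Efficiency property: attributions sum to prediction minus baseline prediction.
    print(sum(phi.values()), model(x) - model(baseline))
```

In practice, exact enumeration scales exponentially with the number of features, which is why applied work relies on sampling-based or model-specific approximations of these values.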
Papers
Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI
Qi Huang, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, Niki van Stein
End-to-end Stroke imaging analysis, using reservoir computing-based effective connectivity, and interpretable Artificial intelligence
Wojciech Ciezobka, Joan Falco-Roget, Cemal Koba, Alessandro Crimi