Explainable AI
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models transparent and understandable, addressing the "black box" problem inherent in many machine learning systems. Current research focuses on developing and evaluating XAI methods based on feature attribution (e.g., SHAP values), counterfactual explanations, and the integration of large language models for generating human-interpretable explanations across data types such as images, text, and time series. XAI matters because it can improve trust in AI systems, facilitate debugging and model improvement, and enable responsible deployment in high-stakes domains such as healthcare and finance.
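As a concrete illustration of the feature-attribution methods mentioned above, the following is a minimal sketch that computes SHAP values for a tree-based model on tabular data. It uses the open-source shap library's TreeExplainer; the choice of dataset and RandomForestRegressor here is an illustrative assumption and is not drawn from any of the papers listed below.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a simple model on a standard tabular dataset (illustrative choice).
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])  # shape: (10, n_features)

# Each row decomposes one prediction into per-feature contributions that,
# together with the expected value, sum to the model's output for that sample.
for name, contrib in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {contrib:+.3f}")

Printing the per-feature contributions for a single sample, as done here, is the simplest way to inspect an attribution; in practice these values are usually aggregated or visualized to explain model behavior globally.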
Papers
Perturbation on Feature Coalition: Towards Interpretable Deep Neural Networks
Xuran Hu, Mingzhe Zhu, Zhenpeng Feng, Miloš Daković, Ljubiša Stanković
iSee: Advancing Multi-Shot Explainable AI Using Case-based Recommendations
Anjana Wijekoon, Nirmalie Wiratunga, David Corsar, Kyle Martin, Ikechukwu Nkisi-Orji, Chamath Palihawadana, Marta Caro-Martínez, Belen Díaz-Agudo, Derek Bridge, Anne Liret
VALE: A Multimodal Visual and Language Explanation Framework for Image Classifiers using eXplainable AI and Language Models
Purushothaman Natarajan, Athira Nambiar