Post-Hoc eXplainable AI
Post-hoc explainable AI (XAI) interprets the decisions of already-trained machine learning models, particularly deep learning models such as convolutional neural networks and knowledge graph embedding models, in order to improve transparency and trustworthiness. Current research emphasizes methods that generate faithful, localized explanations, often by leveraging background knowledge (e.g., knowledge graphs, concept hierarchies) or domain expertise to refine attribution maps and to address issues such as explanation uncertainty and the Rashomon effect, in which multiple distinct explanations are equally consistent with the same prediction. These advances are crucial for building trust in AI systems across applications ranging from medical diagnosis to knowledge graph reasoning, because they provide human-understandable justifications for model predictions.
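To make the post-hoc setting concrete, below is a minimal sketch of one of the simplest attribution methods, vanilla gradient saliency (Simonyan et al., 2013), assuming PyTorch is available. The untrained CNN and the random input here are placeholders standing in for an already-trained model and a real image; the key point is that the model's weights are never modified, only queried.

```python
# Minimal sketch of post-hoc attribution via input gradients ("vanilla saliency").
# Assumes PyTorch; the model and input below are hypothetical placeholders.
import torch
import torch.nn as nn

# Stand-in for an already-trained CNN classifier (in practice, load real weights).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()  # post-hoc: the model is fixed; we only interpret its decisions

# Placeholder input image; requires_grad lets us backpropagate to the pixels.
x = torch.rand(1, 3, 32, 32, requires_grad=True)

logits = model(x)
target = logits.argmax(dim=1).item()  # explain the predicted class

# Backpropagate the target logit to the input; the per-pixel gradient
# magnitude serves as a simple attribution map over the image.
model.zero_grad()
logits[0, target].backward()
saliency = x.grad.abs().max(dim=1).values  # collapse channels -> (1, 32, 32)

print(saliency.shape)  # attribution map with the input's spatial dimensions
```

Gradient saliency is a useful baseline but is known to be noisy and sensitive to small input perturbations, which is precisely the kind of weakness that the knowledge-guided refinements and uncertainty treatments described above aim to address.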