Explanation Graph

Explanation graphs aim to make the decision-making processes of complex models, particularly Graph Neural Networks (GNNs) and large language models (LLMs), more transparent and understandable. Current research focuses on methods for generating these graphs, often drawing on game theory, Bayesian inference, and contrastive learning to improve explanation accuracy and to mitigate issues such as learning bias and error propagation during graph generation. This work is crucial for building trust in AI systems, especially in high-stakes applications where understanding model predictions is paramount, and for advancing the field of explainable AI.
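To make the idea concrete, here is a minimal, illustrative sketch (not from any paper summarized here) of one common family of explanation-graph methods: perturbation-based edge attribution, where each edge is scored by how much the model's output drops when that edge is removed, and the highest-scoring edges form the explanation subgraph. The toy `model_score` function (a triangle counter) is a hypothetical stand-in for a trained GNN.

```python
from itertools import combinations

def model_score(edges):
    """Toy stand-in for a trained model: counts triangles in the graph.
    A real pipeline would call a trained GNN's prediction here."""
    nodes = {u for e in edges for u in e}
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1 for a, b, c in combinations(sorted(nodes), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def explanation_graph(edges, top_k=2):
    """Score each edge by the drop in model output when it is removed,
    then keep the top_k most influential edges as the explanation graph."""
    base = model_score(edges)
    importance = {e: base - model_score([f for f in edges if f != e])
                  for e in edges}
    return sorted(importance, key=importance.get, reverse=True)[:top_k]

# A triangle 0-1-2 plus a pendant edge (2, 3): the triangle edges are the
# only ones whose removal changes the (toy) model's output.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(explanation_graph(edges))
```

This brute-force leave-one-edge-out scoring illustrates the principle; published methods replace it with learned edge masks, Shapley-value estimates from game theory, or generative models, precisely to avoid the combinatorial cost and the error propagation the summary above mentions.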

Papers