Explanation Graph
Explanation graphs make the decision-making processes of complex models, particularly graph neural networks (GNNs) and large language models (LLMs), more transparent and understandable by exposing the structures, such as influential subgraphs or reasoning chains, that drive a prediction. Current research focuses on methods for generating these graphs, often drawing on game theory, Bayesian inference, and contrastive learning to improve faithfulness and to mitigate learning bias and error propagation during graph generation. This work is crucial for building trust in AI systems, especially in high-stakes applications where understanding model predictions is paramount, and for advancing the broader field of explainable AI.
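The overview above does not commit to a specific algorithm, so the following is only a minimal sketch of one common way an explanation graph is produced for a GNN: optimizing a soft edge mask so that the retained subgraph preserves the model's prediction while staying sparse (in the spirit of mask-based explainers such as GNNExplainer). The `TinyGCN` model, the `explain_node` helper, and all hyperparameters here are illustrative assumptions, not an implementation from any of the listed papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """Two-layer GCN-style network over a dense adjacency matrix (illustrative)."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # Row-normalized propagation: (D^-1 A) X W
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        a_norm = adj / deg
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)


def explain_node(model, x, adj, node_idx, epochs=200, lam=0.05):
    """Learn a soft edge mask that preserves the prediction for `node_idx`
    while penalizing the number of retained edges (sparsity)."""
    model.eval()
    with torch.no_grad():
        target = model(x, adj)[node_idx].argmax()

    edge_logits = nn.Parameter(torch.zeros_like(adj))
    opt = torch.optim.Adam([edge_logits], lr=0.05)

    for _ in range(epochs):
        opt.zero_grad()
        mask = torch.sigmoid(edge_logits) * adj  # mask only existing edges
        logits = model(x, mask)[node_idx]
        pred_loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
        sparsity = mask.sum() / adj.sum().clamp(min=1.0)
        (pred_loss + lam * sparsity).backward()
        opt.step()

    # Edge importance scores; the top-scoring edges form the explanation graph.
    return (torch.sigmoid(edge_logits) * adj).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 6, 4
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) > 0.6).float()
    adj = ((adj + adj.T) > 0).float()
    adj.fill_diagonal_(1.0)  # keep self-loops
    model = TinyGCN(d, 8, 3)
    scores = explain_node(model, x, adj, node_idx=0)
    print("edge importance for node 0:\n", scores)
```

Thresholding or taking the top-k entries of the returned score matrix yields the explanation subgraph; game-theoretic, Bayesian, or contrastive variants mentioned above differ mainly in how they score or regularize the retained edges.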