Interpretable Graph Neural Networks
Interpretable Graph Neural Networks (IGNNs) aim to improve the transparency and understandability of graph neural networks, which are powerful but often "black box" models. Current research focuses on architectures and algorithms that generate explanations alongside predictions, often by incorporating attention mechanisms, causal inference, or graph kernel methods to highlight important features and subgraphs. This work matters because it addresses the need for trust and accountability in GNN applications across fields such as neuroscience, healthcare, and materials science, where understanding model decisions is essential.
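To make the attention-based approach concrete, here is a minimal, hedged sketch (not any specific paper's method) of how GAT-style attention scores can double as edge importances: each edge receives a normalized weight, and high-weight edges are read out as the explanation. All weights and names here are hypothetical, randomly initialized stand-ins for learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy directed graph: 4 nodes, edges as (src, dst) pairs
edges = [(0, 1), (2, 1), (3, 1), (1, 0)]
H = rng.normal(size=(4, 8))   # node features (hypothetical)
W = rng.normal(size=(8, 4))   # stand-in for a learned projection
a = rng.normal(size=(8,))     # stand-in for a learned attention vector

Z = H @ W  # projected node features

# GAT-style unnormalized attention per edge: a . [z_src || z_dst]
scores = np.array([a @ np.concatenate([Z[s], Z[d]]) for s, d in edges])

# Normalize scores over the incoming edges of each destination node
importance = np.zeros(len(edges))
for node in range(4):
    idx = [i for i, (_, d) in enumerate(edges) if d == node]
    if idx:
        importance[idx] = softmax(scores[idx])

# Edges with high attention are read out as the "explanation"
for (s, d), w in zip(edges, importance):
    print(f"edge {s}->{d}: importance {w:.2f}")
```

In practice the same readout is available from libraries such as PyTorch Geometric, whose `GATConv` layer can return its attention coefficients; the sketch above only shows the underlying computation.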