Graph Explainability

Graph explainability focuses on making the decisions of graph neural networks (GNNs) more transparent and understandable. Current research emphasizes methods that identify the subgraphs, nodes, or edges that contribute most to a GNN's prediction, often employing game-theoretic approaches such as Shapley values, or attention mechanisms within GNN architectures, to quantify feature importance. This work is vital for building trust in GNN applications across diverse domains, from recommender systems and drug discovery to biomedical hypothesis generation, because it provides insight into model behavior and facilitates the detection of biases or vulnerabilities. Researchers are also exploring how best to present these explanations to users, for example by comparing the effectiveness of graph visualizations against textual summaries generated by large language models. A sketch of the game-theoretic idea follows below.
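To make the game-theoretic approach concrete, the following is a minimal sketch of estimating per-edge Shapley values for a graph prediction via Monte Carlo sampling over edge permutations. It is an illustration under stated assumptions, not any particular paper's method: `predict` is a hypothetical stand-in for evaluating a trained GNN on the subgraph induced by a retained edge set, and the toy example at the end assumes a model that responds only to one specific pair of edges.

```python
import random

def shapley_edge_importance(edges, predict, num_samples=200, seed=0):
    """Monte Carlo estimate of each edge's Shapley value.

    edges: list of hashable edge identifiers.
    predict: callable mapping a set of retained edges to the model's
        scalar output for the target prediction (e.g., a class probability).
        Hypothetical stand-in for running a GNN on the induced subgraph.
    """
    rng = random.Random(seed)
    contribution = {e: 0.0 for e in edges}
    for _ in range(num_samples):
        # Sample a random ordering in which edges join the coalition.
        order = edges[:]
        rng.shuffle(order)
        kept = set()
        prev_score = predict(kept)
        for e in order:
            kept.add(e)
            curr_score = predict(kept)
            # Marginal contribution of edge e given the edges added so far.
            contribution[e] += curr_score - prev_score
            prev_score = curr_score
    # Average marginal contributions approximate the Shapley values.
    return {e: total / num_samples for e, total in contribution.items()}

# Toy example: a mock "model" that scores 1.0 only when edges (0, 1)
# and (1, 2) are both retained, so those two edges should share the credit.
edges = [(0, 1), (1, 2), (2, 3)]
def predict(kept):
    return 1.0 if {(0, 1), (1, 2)} <= kept else 0.0

print(shapley_edge_importance(edges, predict))
```

Permutation sampling trades the exact computation, which is exponential in the number of edges, for an unbiased estimate whose variance shrinks as `num_samples` grows; practical explainers add further approximations on top of this basic scheme.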

Papers