Graph Explainability
Graph explainability focuses on making the decisions of graph neural networks (GNNs) more transparent and understandable. Current research emphasizes methods that identify the subgraphs, nodes, or edges that contribute most to a GNN's prediction, often using game-theoretic approaches such as Shapley values, or attention mechanisms built into the GNN architecture, to quantify feature importance. The field is vital for building trust in GNN applications across diverse domains, from recommender systems and drug discovery to biomedical hypothesis generation, by providing insight into model behavior and helping to detect biases or vulnerabilities. Researchers are also exploring how best to present these explanations to users, for instance by comparing graph visualizations with textual summaries generated by large language models.
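As a rough illustration of the game-theoretic idea, the sketch below estimates a Shapley-style importance score for each edge by Monte Carlo sampling over random edge orderings and averaging each edge's marginal contribution to the prediction. The `shapley_edge_importance` helper and the toy `predict` function are hypothetical names introduced here for illustration; in practice, `predict` would wrap a GNN forward pass on the edge-masked graph rather than the toy scoring rule used below.

```python
import random
from typing import Callable, FrozenSet, List, Tuple

Edge = Tuple[int, int]

def shapley_edge_importance(
    edges: List[Edge],
    predict: Callable[[FrozenSet[Edge]], float],
    num_samples: int = 2000,
    seed: int = 0,
) -> dict:
    """Monte Carlo estimate of each edge's Shapley value.

    `predict` maps a subset of edges (an edge-masked graph) to the model's
    scalar output for the prediction being explained; an edge's Shapley
    value is its average marginal contribution over random edge orderings.
    """
    rng = random.Random(seed)
    contrib = {e: 0.0 for e in edges}
    for _ in range(num_samples):
        order = edges[:]
        rng.shuffle(order)
        present: set = set()
        prev = predict(frozenset(present))     # value of the empty coalition
        for e in order:
            present.add(e)
            cur = predict(frozenset(present))  # value after adding edge e
            contrib[e] += cur - prev           # marginal contribution of e
            prev = cur
    return {e: v / num_samples for e, v in contrib.items()}


if __name__ == "__main__":
    # Toy 4-node graph; the "model" scores node 0 by how well connected it is.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

    def predict(active: FrozenSet[Edge]) -> float:
        # Stand-in for a GNN forward pass on the edge-masked graph:
        # here, simply the number of active edges incident to node 0.
        return float(sum(1 for (u, v) in active if 0 in (u, v)))

    for edge, phi in shapley_edge_importance(edges, predict).items():
        print(edge, round(phi, 3))
```

In this toy example, only the two edges incident to node 0 receive non-zero importance, matching the intuition that they alone change the prediction; with a real GNN, the same procedure surfaces the edges whose removal most alters the model's output.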
Papers
Explainable Biomedical Hypothesis Generation via Retrieval Augmented Generation enabled Large Language Models
Alexander R. Pelletier, Joseph Ramirez, Irsyad Adam, Simha Sankar, Yu Yan, Ding Wang, Dylan Steinecke, Wei Wang, Peipei Ping
Evaluating graph-based explanations for AI-based recommender systems
Simon Delarue, Astrid Bertrand, Tiphaine Viard
Evaluating Explainability for Graph Neural Networks
Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik
UKP-SQuARE v2: Explainability and Adversarial Attacks for Trustworthy QA
Rachneet Sachdeva, Haritz Puerto, Tim Baumgärtner, Sewin Tariverdian, Hao Zhang, Kexin Wang, Hossain Shaikh Saadi, Leonardo F. R. Ribeiro, Iryna Gurevych