GNN Explanation
Explaining the decisions of Graph Neural Networks (GNNs) is crucial for building trust in their predictions and for understanding how those predictions are made. Current research focuses on methods that identify the key subgraphs or structural motifs driving a GNN's output, often drawing on game theory (Shapley values), attention mechanisms, and generative models to produce interpretable explanations. These efforts address the "black box" nature of GNNs, improving transparency and supporting their use in high-stakes domains such as healthcare and finance, where understanding model decisions is paramount. A significant open challenge is ensuring that the explanations themselves are robust and reliable, particularly under adversarial attacks or noisy data.
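Both papers below concern how such explanations are evaluated, commonly via fidelity-style metrics that compare the model's prediction on the full graph with its prediction once the explanatory edges are removed (Fidelity+) or retained in isolation (Fidelity−). The following is a minimal, framework-agnostic sketch of those two metrics in PyTorch; the function name `fidelity_scores`, the `model(x, edge_index)` calling convention, and the hard `threshold` on the edge mask are illustrative assumptions, not an API from either paper.

```python
import torch

def fidelity_scores(model, x, edge_index, edge_mask, target, threshold=0.5):
    """Probability-based Fidelity+ / Fidelity- for a hard edge explanation (sketch).

    model      -- assumed callable mapping (x, edge_index) to class logits
    edge_mask  -- per-edge importance scores in [0, 1]; edges scoring above
                  `threshold` form the explanatory subgraph
    target     -- index of the class being explained
    """
    important = edge_mask > threshold  # boolean selector over edges

    with torch.no_grad():
        # Predicted probability of the target class on the original graph.
        full = model(x, edge_index).softmax(dim=-1)[..., target]
        # Fidelity+: remove the explanatory edges; a large drop in probability
        # means those edges were necessary for the prediction.
        without = model(x, edge_index[:, ~important]).softmax(dim=-1)[..., target]
        # Fidelity-: keep only the explanatory edges; a small drop means the
        # explanation alone is sufficient to reproduce the prediction.
        only = model(x, edge_index[:, important]).softmax(dim=-1)[..., target]

    fid_plus = (full - without).mean().item()
    fid_minus = (full - only).mean().item()
    return fid_plus, fid_minus
```

A large Fidelity+ together with a small Fidelity− suggests the selected edges are both necessary and sufficient for the prediction being explained.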
Papers
Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks
Xu Zheng, Farhad Shirani, Tianchun Wang, Wei Cheng, Zhuomin Chen, Haifeng Chen, Hua Wei, Dongsheng Luo
GNNX-BENCH: Unravelling the Utility of Perturbation-based GNN Explainers through In-depth Benchmarking
Mert Kosan, Samidha Verma, Burouj Armgaan, Khushbu Pahwa, Ambuj Singh, Sourav Medya, Sayan Ranu