Graph Neural Network Explanation

Explaining the predictions of Graph Neural Networks (GNNs) is crucial for building trust in their applications. Current research focuses on developing robust explanation methods, including those based on motif identification and counterfactual reasoning, while also addressing how to evaluate explanation quality and how vulnerable explanations themselves are to adversarial attacks. Improved GNN explainability is vital for the reliability and adoption of GNNs across scientific fields and practical applications where interpretability is paramount, such as drug discovery and social network analysis.
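As a concrete illustration of the counterfactual idea, the sketch below (a minimal toy, not taken from any specific paper; the graph, features, and untrained weights are all illustrative assumptions) scores each edge of a small graph by deleting it and measuring how much the output of a one-layer GCN for a target node changes. Edges whose removal moves the prediction the most are the counterfactually most important ones.

```python
# Minimal sketch of counterfactual edge importance for a toy, untrained
# one-layer GCN (NumPy only): remove each edge in turn and measure how
# much the model's output for a target node changes.
import numpy as np

rng = np.random.default_rng(0)

def gcn_forward(adj, feats, weight):
    """One GCN layer: symmetric normalization, linear transform, ReLU."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # D^-1/2 (A+I) D^-1/2
    return np.maximum(norm @ feats @ weight, 0)  # ReLU activation

# Toy undirected graph: 5 nodes, each edge stored once as (u, v).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
n = 5
adj = np.zeros((n, n))
for u, v in edges:
    adj[u, v] = adj[v, u] = 1.0

feats = rng.normal(size=(n, 4))    # random node features (illustrative)
weight = rng.normal(size=(4, 2))   # random, untrained layer weights
target = 2                         # node whose prediction we explain

base = gcn_forward(adj, feats, weight)[target]

# Counterfactual scores: larger prediction change => more important edge.
for u, v in edges:
    pert = adj.copy()
    pert[u, v] = pert[v, u] = 0.0  # delete this edge
    delta = np.linalg.norm(gcn_forward(pert, feats, weight)[target] - base)
    print(f"edge ({u},{v}): importance {delta:.4f}")
```

Real counterfactual explainers typically search over edge subsets or learn a differentiable mask rather than deleting single edges exhaustively, but the one-edge-at-a-time version above captures the core perturb-and-compare principle.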

Papers