Graph Neural Network Explanation
Explaining the predictions of Graph Neural Networks (GNNs) is crucial for building trust in and understanding of their applications. Current research focuses on developing robust explanation methods, including those based on motif identification and counterfactual reasoning, while simultaneously addressing the challenges of evaluating explanation quality and hardening explanations against adversarial attacks. Improved GNN explainability is vital for enhancing the reliability and adoption of GNNs across diverse scientific fields and practical applications where interpretability is paramount, such as drug discovery and social network analysis.
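To make the idea concrete, one widely used family of explanation methods learns a soft mask over a graph's edges that preserves the model's prediction while staying sparse; the surviving high-weight edges then serve as the explanation. Below is a minimal, self-contained sketch in this spirit, loosely following the edge-mask objective of GNNExplainer (Ying et al., 2019). It is illustrative only, not a reference implementation: the model (`TinyGCN`), the helper (`explain_node`), the dense-adjacency representation, and hyperparameters such as `lam` are all assumptions chosen for brevity.

```python
# Illustrative sketch of an edge-mask explainer for a GNN node prediction.
# Assumptions: a dense adjacency matrix, a toy two-layer GCN-style model,
# and hyperparameters (epochs, lam, lr) picked arbitrarily for the demo.
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """Two-layer GCN-style model over a dense (possibly masked) adjacency."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Row-normalized propagation: (D^-1 A) X W at each layer.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin1((adj / deg) @ x))
        return self.lin2((adj / deg) @ h)

def explain_node(model, x, adj, node_idx, epochs=200, lam=0.05):
    """Learn a soft edge mask that preserves the model's prediction for
    `node_idx` while being sparse (small L1 norm of the mask)."""
    model.eval()
    with torch.no_grad():
        target = model(x, adj)[node_idx].argmax()  # prediction to preserve
    # One learnable logit per potential edge; sigmoid maps it into (0, 1).
    mask_logits = torch.randn_like(adj, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        mask = torch.sigmoid(mask_logits) * adj  # only mask existing edges
        logits = model(x, mask)[node_idx]
        # Keep the original prediction + encourage a sparse mask.
        loss = (F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
                + lam * mask.sum())
        loss.backward()
        opt.step()
    return (torch.sigmoid(mask_logits) * adj).detach()

# Toy usage: a random undirected graph with 8 nodes and 5 node features.
torch.manual_seed(0)
x = torch.randn(8, 5)
adj = (torch.rand(8, 8) < 0.3).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(1.0)
model = TinyGCN(5, 16, 3)
edge_importance = explain_node(model, x, adj, node_idx=0)
print(edge_importance[0])  # importance scores for node 0's incident edges
```

A counterfactual explainer, by contrast, would roughly invert this objective: rather than finding a sparse subgraph that preserves the prediction, it searches for the smallest edge perturbation that flips the model's output.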