Self-Explainable Graph Neural Networks

Self-explainable Graph Neural Networks (GNNs) improve interpretability by generating explanations as part of the prediction process itself, rather than relying on separate post-hoc methods. Current research focuses on novel architectures, including meta-learning, information-bottleneck, and prototype-based designs, that produce faithful and accurate explanations alongside predictions, particularly in few-shot learning scenarios and for tasks such as link prediction. This work is crucial for building trust and enabling wider adoption of GNNs in high-stakes domains such as medicine, where understanding a model's decision-making process is paramount.
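The core idea, producing the prediction and its explanation in a single forward pass, can be sketched minimally. The snippet below is an illustrative toy, not any specific published method: it assumes a simplified mean-aggregation GCN layer and a sigmoid edge-scoring head (all function names and weights are hypothetical), where the edge scores both gate message passing and serve as the explanation. Real self-explainable GNNs learn these components jointly with objectives such as an information-bottleneck loss.

```python
import numpy as np

def gcn_layer(A, X, W):
    # Simplified GCN step: mean-aggregate neighbor features, transform, ReLU.
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    return np.maximum((A @ X) / deg @ W, 0.0)

def self_explainable_forward(A, X, W_score, W_cls):
    # 1. Explainer head: score every edge from its endpoint features.
    n = A.shape[0]
    scores = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                scores[i, j] = 1.0 / (1.0 + np.exp(-(X[i] + X[j]) @ W_score))
    # 2. Predict on the soft-masked graph, so the explanation is
    #    intrinsic to inference rather than computed post hoc.
    H = gcn_layer(A * scores, X, W_cls)
    logits = H.mean(axis=0)          # toy graph-level readout
    pred = int(np.argmax(logits))
    # 3. The explanation is the set of high-scoring edges.
    kept = [(i, j) for i in range(n) for j in range(n) if scores[i, j] > 0.5]
    return pred, kept

# Toy 4-node path graph with one-hot node features and fixed weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
W_score = np.array([1.0, 1.0, -1.0, -1.0])   # hypothetical scoring weights
W_cls = np.array([[1.0, 0.0],                # hypothetical classifier weights
                  [0.0, 1.0],
                  [1.0, 0.0],
                  [1.0, 0.0]])
pred, explanation = self_explainable_forward(A, X, W_score, W_cls)
# explanation contains only the edges the model itself relied on most.
```

Because the same edge scores that form the explanation also gate the messages used for prediction, the explanation is faithful by construction, which is the central motivation behind this line of work.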

Papers