Interpretable Graph Learning

Interpretable graph learning aims to develop graph-based machine learning models whose predictions are readily understandable and trustworthy, addressing the "black box" nature of many deep learning approaches. Current research focuses on developing inherently interpretable models, often employing graph neural networks (GNNs) with attention mechanisms or probabilistic logic programming, as well as post-hoc explanation methods that analyze existing GNNs to extract meaningful insights. This field is crucial for applications demanding transparency and accountability, such as drug discovery, healthcare, and scientific knowledge discovery, where understanding the model's reasoning is as important as its predictive accuracy.
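To make the attention-based approach concrete, the sketch below (an illustrative example, not taken from any specific paper) implements a single-head graph attention layer in NumPy whose normalized attention coefficients double as per-edge importance scores — the kind of built-in explanation signal that inherently interpretable GNNs expose. All function and variable names here are assumptions for illustration.

```python
# Minimal sketch of an interpretable graph attention layer: the attention
# coefficients computed for message passing are returned as edge importances.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_layer(X, edges, W, a):
    """X: (n, d) node features; edges: list of (src, dst) pairs;
    W: (d, h) projection; a: (2h,) attention vector."""
    H = X @ W                          # projected node features, shape (n, h)
    n = X.shape[0]
    out = np.zeros_like(H)
    edge_importance = {}               # (src, dst) -> attention weight
    for dst in range(n):
        nbrs = [s for s, d in edges if d == dst]
        if not nbrs:
            out[dst] = H[dst]          # no incoming edges: keep own features
            continue
        # unnormalized attention logit for each incoming edge
        logits = np.array([np.concatenate([H[s], H[dst]]) @ a for s in nbrs])
        alpha = softmax(logits)        # coefficients sum to 1 per node
        out[dst] = sum(w * H[s] for w, s in zip(alpha, nbrs))
        for w, s in zip(alpha, nbrs):
            edge_importance[(s, dst)] = w
    return out, edge_importance
```

Because each node's incoming attention weights sum to one, inspecting `edge_importance` directly answers "which neighbors drove this node's representation" — the model's explanation is a byproduct of its forward pass rather than a post-hoc approximation.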

Papers