Graph Rationale

Graph rationale research focuses on identifying the minimal, crucial substructures within a graph that explain a model's prediction, with the aim of improving interpretability, robustness, and generalization. Current work explores methods such as variational inference, generative networks, and contrastive learning to extract these rationales, often incorporating environment-based augmentations or multi-generator architectures to address challenges such as spurious correlations and limited learning signals. This research is significant for enhancing the reliability and trustworthiness of graph-based machine learning models across diverse applications, from molecular property prediction to recommendation systems.
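
As a concrete illustration of the extraction step, the sketch below shows one common pattern rather than any specific paper's method: a learnable edge mask scores each edge, reweights message passing for prediction, and is regularized toward sparsity so that only a small subgraph (the rationale) carries the signal. It assumes PyTorch and PyTorch Geometric; the class name, architecture sizes, and the 0.1 sparsity weight are illustrative choices.

```python
# Minimal sketch of a learnable edge-mask rationale extractor (illustrative,
# not a specific published method). An MLP scores each edge from its endpoint
# embeddings; the soft mask reweights message passing for prediction, and a
# sparsity penalty pushes the mask toward a minimal rationale subgraph.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class EdgeMaskRationaleGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = GCNConv(in_dim, hidden_dim)           # encoder used only for edge scoring
        self.edge_scorer = nn.Sequential(                  # scores an edge from its two endpoints
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        self.conv = GCNConv(in_dim, hidden_dim)            # prediction pass, mask-weighted
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.embed(x, edge_index))
        src, dst = edge_index
        edge_mask = torch.sigmoid(                          # soft rationale mask in (0, 1)
            self.edge_scorer(torch.cat([h[src], h[dst]], dim=-1))
        ).squeeze(-1)
        z = torch.relu(self.conv(x, edge_index, edge_weight=edge_mask))
        logits = self.classifier(global_mean_pool(z, batch))
        sparsity = edge_mask.mean()                         # regularize toward a minimal rationale
        return logits, edge_mask, sparsity


# Hypothetical training step on a PyG batch `data`:
#   logits, edge_mask, sparsity = model(data.x, data.edge_index, data.batch)
#   loss = F.cross_entropy(logits, data.y) + 0.1 * sparsity
# Edges with high mask values after training form the extracted rationale.
```

Methods surveyed here differ mainly in how this selection is learned, e.g. via a variational objective over the mask, generative rationale networks, or contrastive terms that separate rationale from environment.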

Papers