Paper ID: 2503.02988 • Published Mar 4, 2025
Out-of-Distribution Generalization on Graphs via Progressive Inference
The development and evaluation of graph neural networks (GNNs) generally
follow the independent and identically distributed (i.i.d.) assumption. Yet
this assumption is often untenable in practice due to uncontrollable data
generation mechanisms. In particular, when the data distribution shifts
significantly, most GNNs fail to produce reliable predictions and may even
make near-random decisions. One of the most promising ways to improve model
generalization is to identify the causally invariant parts of the input
graph. Nonetheless, we observe a significant distribution gap between the
causal parts learned by existing methods and the ground truth, leading to
poor performance. To address these issues, this paper presents GPro, a model
that learns graph causal invariance through progressive inference.
Specifically, the complex task of learning graph causal invariance is
decomposed into multiple intermediate inference steps ordered from easy to
hard, and GPro's perception is continuously strengthened through this
progressive inference process so that it extracts causal features that remain
stable under distribution shifts. We also enlarge the training distribution
by creating counterfactual samples, strengthening GPro's ability to capture
the causally invariant parts. Extensive experiments demonstrate that GPro
outperforms state-of-the-art methods by 4.91% on average; on datasets with
more severe distribution shifts, the improvement reaches up to 6.86%.
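
The abstract does not include code, but the two ideas it describes (an easy-to-hard progressive schedule for extracting a causal subgraph, and counterfactual augmentation of the training distribution) can be illustrated with a minimal sketch. Everything below is an assumption-laden illustration, not the authors' implementation: the `EdgeMasker` module, the temperature schedule standing in for the easy-to-hard steps, and the feature-noise counterfactual are all hypothetical choices.

```python
# Hypothetical sketch (NOT the GPro code): an edge masker scores edges to
# select a candidate causal subgraph, a GNN classifies the masked graph, and
# the mask temperature is annealed stage by stage (soft/easy -> sharp/hard).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    """One simplified dense-adjacency graph convolution: H' = ReLU(A @ H @ W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, adj, h):
        return torch.relu(adj @ self.lin(h))

class EdgeMasker(nn.Module):
    """Scores each edge from its endpoint features; a sigmoid temperature
    controls how soft or hard the resulting causal mask is."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, adj, h, temperature):
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.score(pair).squeeze(-1)
        return torch.sigmoid(logits / temperature) * adj  # mask only real edges

# Toy setup: one random undirected graph with a graph-level label.
torch.manual_seed(0)
n, d, n_classes = 12, 8, 2
adj = (torch.rand(n, n) < 0.3).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)
x = torch.randn(n, d)
y = torch.tensor([1])

gnn = nn.ModuleList([DenseGCNLayer(d, d), DenseGCNLayer(d, d)])
masker, head = EdgeMasker(d), nn.Linear(d, n_classes)
params = list(gnn.parameters()) + list(masker.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

# Progressive inference (assumed schedule): early stages use a high
# temperature, giving soft "easy" masks; later stages sharpen the mask
# toward a hard causal subgraph.
for temperature in [5.0, 1.0, 0.2]:
    for _ in range(100):
        masked_adj = masker(adj, x, temperature)
        h = x
        for layer in gnn:
            h = layer(masked_adj, h)
        logits = head(h.mean(dim=0, keepdim=True))  # graph-level readout

        # Counterfactual augmentation (assumed form): perturb features mostly
        # on nodes whose edges the mask discards; the prediction on this
        # counterfactual sample should match the original label.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        keep = masked_adj.sum(dim=1, keepdim=True) / deg  # fraction of edges kept
        x_cf = x + torch.randn_like(x) * (1.0 - keep)
        h_cf = x_cf
        for layer in gnn:
            h_cf = layer(masked_adj, h_cf)
        logits_cf = head(h_cf.mean(dim=0, keepdim=True))

        loss = F.cross_entropy(logits, y) + F.cross_entropy(logits_cf, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The counterfactual term penalizes predictions that depend on the masked-out (presumed spurious) part of the graph, which is one simple way to encourage the invariance property the abstract describes; the paper's actual counterfactual construction may differ.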