Paper ID: 2407.11025
Backdoor Graph Condensation
Jiahao Wu, Ning Lu, Zeiyu Dai, Wenqi Fan, Shengcai Liu, Qing Li, Ke Tang
Recently, graph condensation has emerged as a prevalent technique to improve the training efficiency of graph neural networks (GNNs). It condenses a large graph into a small one such that a GNN trained on the small synthetic graph achieves performance comparable to that of a GNN trained on the large graph. However, while existing graph condensation studies mainly focus on the best trade-off between graph size and GNN performance (model utility), the security issues of graph condensation have not been studied. To bridge this research gap, we propose the task of backdoor graph condensation. Although graph backdoor attacks have been extensively explored, applying existing graph backdoor methods to graph condensation is impractical, since they can undermine model utility and yield low attack success rates. To alleviate these issues, we introduce two primary objectives for backdoor attacks against graph condensation: 1) the injection of triggers should not degrade the quality of the condensed graphs, preserving the utility of GNNs trained on them; and 2) the effectiveness of triggers should persist throughout the condensation process, achieving a high attack success rate. To pursue these objectives, we devise the first backdoor attack against graph condensation, denoted as BGC. Specifically, we inject triggers during condensation and iteratively update them to ensure effective attacks. Further, we propose a poisoned node selection module to minimize the influence of triggers on the condensed graphs' quality. Extensive experiments demonstrate the effectiveness of our attack: BGC achieves a high attack success rate (close to 1.0) and good model utility in all cases. Furthermore, the results demonstrate our method's resilience against multiple defense methods. Finally, we conduct comprehensive studies to analyze the factors that influence attack performance.
Submitted: Jul 3, 2024
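The abstract outlines BGC's high-level loop: select poisoned nodes, inject a trigger during condensation, and iteratively update the trigger so its effect survives the condensation process. Below is a minimal, purely illustrative sketch of such a loop. The abstract gives no algorithmic details, so every concrete choice here, including the feature-space trigger, the mean-based stand-in for condensation, the node selection heuristic, and the update rule, is a hypothetical placeholder and not the paper's method.

```python
import numpy as np

# Purely illustrative sketch of the loop described in the abstract. All
# concrete choices below are hypothetical placeholders, not the actual
# BGC algorithm.

rng = np.random.default_rng(0)
N, d, n_poison, steps = 200, 16, 10, 50

X = rng.normal(size=(N, d))      # node features of the large original graph
y = rng.integers(0, 2, size=N)   # binary node labels, for simplicity
target = 1                       # attacker-chosen target label
trigger = np.zeros(d)            # trigger as a feature perturbation (assumption)

def select_poisoned(X, k):
    """Stand-in for the poisoned node selection module: choose the k nodes
    closest to the global feature mean, so that poisoning perturbs the
    condensed graph's statistics as little as possible (toy heuristic)."""
    dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return np.argsort(dist)[:k]

poisoned = select_poisoned(X, n_poison)

for _ in range(steps):
    # 1) Inject the current trigger into the selected nodes and relabel them.
    Xp, yp = X.copy(), y.copy()
    Xp[poisoned] += trigger
    yp[poisoned] = target

    # 2) Toy "condensation": per-class feature means stand in for the small
    #    synthetic graph that a real condensation method would learn.
    X_cond = np.stack([Xp[yp == c].mean(axis=0) for c in (0, 1)])

    # 3) Iteratively update the trigger so its effect survives condensation:
    #    nudge poisoned features toward the target class's condensed point.
    grad = X_cond[target] - (X[poisoned].mean(axis=0) + trigger)
    trigger += 0.1 * grad

print("final trigger norm:", round(float(np.linalg.norm(trigger)), 4))
```

The sketch only conveys the structure of the two stated objectives: the selection step tries to keep the condensed graph's quality intact, while the update step inside the condensation loop keeps the trigger effective on the condensed result rather than only on the original graph.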