Subgraph Explanation
Subgraph explanation aims to make the predictions of graph neural networks (GNNs) more interpretable by identifying the subgraphs of the input that drive a model's decisions. Current research focuses on efficient identification algorithms, often grounded in game theory or information-theoretic principles, with a strong emphasis on two properties of the resulting explanations: fidelity (the subgraph faithfully accounts for the model's prediction) and sparsity (the subgraph is compact relative to the full input). This line of work matters because it addresses the "black box" nature of GNNs, fostering trust and enabling deeper understanding in applications ranging from drug discovery to social network analysis.
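As a minimal sketch of how fidelity and sparsity are typically measured, the toy example below masks out a candidate explanation subgraph and checks how much the prediction changes. Everything here is an illustrative assumption, not taken from the listed papers: `toy_gnn` is a stand-in for a trained classifier, and the example graph and edge mask are invented.

```python
import numpy as np

def toy_gnn(adj):
    # Hypothetical stand-in for a trained GNN's class probability:
    # a squashed sum of edge weights (for illustration only).
    return 1.0 / (1.0 + np.exp(-adj.sum() / 4.0))

def fidelity_plus(adj, mask):
    # Fidelity+: how much the prediction drops when the
    # explanation's edges are removed from the input graph.
    return toy_gnn(adj) - toy_gnn(adj * (1 - mask))

def sparsity(adj, mask):
    # Sparsity: fraction of the input's edges that the
    # explanation leaves out (higher = more compact).
    return 1.0 - (adj * mask).sum() / adj.sum()

# 4-node example graph (symmetric adjacency) and a candidate explanation
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
mask = np.zeros_like(adj)
mask[0, 1] = mask[1, 0] = 1.0  # explanation keeps only edge (0, 1)

print(fidelity_plus(adj, mask))  # higher -> the kept edges matter more
print(sparsity(adj, mask))       # higher -> the explanation is smaller
```

A good explanation scores high on both metrics at once; optimizing that trade-off is what the algorithms surveyed above (game-theoretic attribution, information bottlenecks) automate on real models.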
Papers
PAC Learnability under Explanation-Preserving Graph Perturbations
Xu Zheng, Farhad Shirani, Tianchun Wang, Shouwei Gao, Wenqian Dong, Wei Cheng, Dongsheng Luo
Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks
Jiahua Rao, Jiancong Xie, Hanjing Lin, Shuangjia Zheng, Zhen Wang, Yuedong Yang