Paper ID: 2410.15849

Focus Where It Matters: Graph Selective State Focused Attention Networks

Shikhar Vashistha, Neetesh Kumar

Traditional graph neural networks (GNNs) lack scalability and lose individual node characteristics due to over-smoothing, especially in deeper networks. This results in sub-optimal feature representations, hurting model performance on tasks involving dynamically changing graphs. To address this issue, we present Graph Selective State Focused Attention Networks (GSANs), a neural network architecture for graph-structured data. GSAN combines multi-head masked self-attention (MHMSA) and selective state space modeling (S3M) layers to overcome these limitations of GNNs. The MHMSA layer allows GSAN to dynamically emphasize crucial node connections, particularly in evolving graph environments. The S3M layer enables the network to adjust dynamically to changing node states and to improve predictions of node behavior in varying contexts, without requiring prior knowledge of the graph structure. Furthermore, the S3M layer enhances generalization to unseen structures and interprets how node states influence link importance. As a result, GSAN performs well on both inductive and transductive tasks and overcomes issues that traditional GNNs experience. To analyze GSAN's performance, comparative experiments against state-of-the-art methods are conducted on benchmark graph datasets, including the $Cora$, $Citeseer$, and $Pubmed$ citation networks and the $protein$-$protein$-$interaction$ dataset; GSAN improves the classification $F1$-score by $1.56\%$, $8.94\%$, $0.37\%$, and $1.54\%$, respectively.
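The abstract does not specify the exact layer equations, so the following is only a minimal sketch of how one GSAN-style block could be composed: multi-head self-attention masked by the adjacency matrix (so attention is restricted to graph edges), followed by an input-dependent, selective-state-space-style gate that decides how much of the attended message updates each node's state. The class name `GSANLayer`, the gating formulation, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GSANLayer(nn.Module):
    """Hypothetical sketch of one GSAN-style block:
    adjacency-masked multi-head self-attention + a selective,
    input-dependent gate over node-state updates."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Assumed selective gate: mixes the attended update with the
        # previous node state, conditioned on the current node features.
        self.gate = nn.Linear(dim, dim)
        self.update = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) in {0, 1}.
        # Block attention between unconnected node pairs (True = masked out).
        attn_mask = adj == 0
        h, _ = self.attn(
            x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0), attn_mask=attn_mask
        )
        h = h.squeeze(0)
        # Input-dependent (selective) gate controls how much of the
        # attended message overwrites each node's previous state.
        g = torch.sigmoid(self.gate(x))
        out = g * torch.tanh(self.update(h)) + (1.0 - g) * x
        return self.norm(out)


if __name__ == "__main__":
    x = torch.randn(5, 16)                  # 5 nodes, 16-dim features
    adj = (torch.rand(5, 5) > 0.5).float()  # random adjacency
    adj.fill_diagonal_(1.0)                 # self-loops so no node is fully masked
    layer = GSANLayer(dim=16)
    print(layer(x, adj).shape)              # torch.Size([5, 16])
```

Stacking several such blocks would give attention-driven message passing whose per-node update strength is state-dependent, which is the intuition the abstract describes; the actual GSAN layers may differ substantially.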

Submitted: Oct 21, 2024