Paper ID: 2504.12474 • Published Apr 16, 2025
Integrating Structural and Semantic Signals in Text-Attributed Graphs with BiGTex
Azadeh Beiranvand, Seyed Mehdi Vahidipour
University of Kashan
Text-attributed graphs (TAGs) present unique challenges in representation
learning by requiring models to capture both the semantic richness of
node-associated texts and the structural dependencies of the graph. While graph
neural networks (GNNs) excel at modeling topological information, they lack the
capacity to process unstructured text. Conversely, large language models (LLMs)
are proficient in text understanding but are typically unaware of graph
structure. In this work, we propose BiGTex (Bidirectional Graph Text), a novel
architecture that tightly integrates GNNs and LLMs through stacked Graph-Text
Fusion Units. Each unit applies mutual attention between textual and
structural representations, enabling information to flow in both directions:
text influences structure, and structure guides textual interpretation. The
proposed architecture is trained using parameter-efficient fine-tuning (LoRA),
keeping the LLM frozen while adapting to task-specific signals. Extensive
experiments on five benchmark datasets demonstrate that BiGTex achieves
state-of-the-art performance in node classification and generalizes effectively
to link prediction. An ablation study further highlights the importance of soft
prompting and bidirectional attention in the model's success.
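To make the fusion mechanism concrete, below is a minimal PyTorch sketch of one Graph-Text Fusion Unit as the abstract describes it: two cross-attention passes, one letting textual representations attend to structural ones, and one in the reverse direction. The class name `GraphTextFusionUnit`, the hidden size, the head count, and the residual-plus-LayerNorm wiring are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GraphTextFusionUnit(nn.Module):
    """One stacked fusion unit with mutual (bidirectional) cross-attention.

    Hypothetical sketch: the real BiGTex unit may differ in width,
    normalization, and how the GNN/LLM states are produced.
    """
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Text queries attend to structural keys/values:
        # structure guides textual interpretation.
        self.text_from_graph = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Structural queries attend to textual keys/values:
        # text influences structure.
        self.graph_from_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_graph = nn.LayerNorm(dim)

    def forward(self, text_h: torch.Tensor, graph_h: torch.Tensor):
        # text_h: (batch, num_tokens, dim) token states from the LLM.
        # graph_h: (batch, num_node_vecs, dim) node states from the GNN.
        t_upd, _ = self.text_from_graph(text_h, graph_h, graph_h)
        g_upd, _ = self.graph_from_text(graph_h, text_h, text_h)
        # Residual connections keep each stream's own signal intact.
        return self.norm_text(text_h + t_upd), self.norm_graph(graph_h + g_upd)

# Tiny smoke test with random states standing in for LLM and GNN outputs.
unit = GraphTextFusionUnit(dim=64)
text_h = torch.randn(4, 10, 64)   # 4 nodes, 10 text tokens each
graph_h = torch.randn(4, 1, 64)   # 4 nodes, 1 structural embedding each
text_h, graph_h = unit(text_h, graph_h)
print(text_h.shape, graph_h.shape)  # torch.Size([4, 10, 64]) torch.Size([4, 1, 64])
```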
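The abstract also states that training uses LoRA while the LLM stays frozen. A minimal sketch of that setup with Hugging Face `peft` follows; the backbone checkpoint (`bert-base-uncased`) and the `target_modules` names are placeholders, since the paper's actual language model is not specified here.

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# Placeholder backbone; BiGTex's actual LLM may differ.
lm = AutoModel.from_pretrained("bert-base-uncased")

# Low-rank adapters on the attention projections; base weights stay frozen.
lora_cfg = LoraConfig(
    r=8,                                 # adapter rank
    lora_alpha=16,                       # scaling factor
    target_modules=["query", "value"],   # module names valid for BERT
    lora_dropout=0.05,
)
lm = get_peft_model(lm, lora_cfg)
lm.print_trainable_parameters()  # only a small fraction of weights train
```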