Text-Attributed Graphs
Text-attributed graphs (TAGs) combine graph structures with textual node and/or edge information, presenting a rich data representation for various applications. Current research focuses on leveraging large language models (LLMs) to enhance graph representation learning, often integrating LLMs with graph neural networks (GNNs) or employing LLMs for data augmentation and prompt-based learning. This interdisciplinary approach aims to improve the accuracy and efficiency of graph-based tasks like node classification and link prediction, impacting fields such as social network analysis, recommendation systems, and bioinformatics. The development of standardized datasets and benchmarks is also a key area of focus, facilitating more robust comparisons and advancements in the field.
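To make the data representation concrete, here is a minimal sketch of a text-attributed graph: nodes carry free text, edges carry structure, node texts are encoded into vectors, and one round of GNN-style neighbor aggregation mixes text features along edges. The toy bag-of-words encoder stands in for an LLM text encoder; all names and texts here are illustrative assumptions, not taken from any paper listed below.

```python
from collections import Counter

# A tiny text-attributed graph: each node has a text attribute,
# and edges define the (undirected) graph structure.
node_text = {
    0: "graph neural network survey",
    1: "language model for graphs",
    2: "contrastive learning on graphs",
}
edges = [(0, 1), (1, 2)]

# Encode each node's text as a bag-of-words vector over a shared
# vocabulary (a stand-in for an LLM/text-encoder embedding).
vocab = sorted({w for t in node_text.values() for w in t.split()})

def encode(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

feats = {n: encode(t) for n, t in node_text.items()}

# Build adjacency lists from the edge list.
neighbors = {n: set() for n in node_text}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def aggregate(feats):
    # One GNN-style message-passing step: each node's new feature
    # is the mean of its own and its neighbors' vectors.
    out = {}
    for n in feats:
        group = [feats[n]] + [feats[m] for m in neighbors[n]]
        out[n] = [sum(col) / len(group) for col in zip(*group)]
    return out

h1 = aggregate(feats)
```

After one aggregation step, node 1 (whose own text never mentions "graph") picks up a nonzero weight on that word from its neighbors, which is the basic mechanism by which structure enriches text features in TAG learning.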
Papers
DTGB: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs
Jiasheng Zhang, Jialin Chen, Menglin Yang, Aosong Feng, Shuang Liang, Jie Shao, Rex Ying
UniGLM: Training One Unified Language Model for Text-Attributed Graphs
Yi Fang, Dongzhe Fan, Sirui Ding, Ninghao Liu, Qiaoyu Tan
GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models
Yi Fang, Dongzhe Fan, Daochen Zha, Qiaoyu Tan