Graph Transformer
Graph Transformers (GTs) are a class of neural networks that apply transformer architectures to graph-structured data, aiming to overcome the limitations of traditional graph neural networks. Current research focuses on enhancing GT efficiency and scalability on large graphs, developing novel attention mechanisms that better capture complex relationships, and addressing challenges such as over-smoothing and vulnerability to adversarial attacks, using techniques like adaptive attack evaluation and sharpness-aware minimization. By enabling more accurate and efficient modeling of complex relationships, the improved performance and expressiveness of GTs are impacting diverse fields, including traffic forecasting, drug discovery, and brain network analysis.
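The core idea shared by many of these architectures, full self-attention over nodes combined with a structure-aware bias, can be made concrete with a minimal sketch. The layer below is an illustrative assumption in the spirit of Graphormer-style attention biases, not the method of any specific paper listed here: every node attends to every other node, and a learned per-head bias that distinguishes connected from unconnected node pairs injects graph topology into the attention scores.

```python
# Minimal Graph Transformer layer sketch (illustrative, not from any listed paper).
# Assumes a dense binary adjacency matrix; real systems use sparse or
# mini-batch formulations for scalability.
import math
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # One learned scalar bias per head for "edge absent" (0) vs. "edge present" (1).
        self.edge_bias = nn.Parameter(torch.zeros(num_heads, 2))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, dim) node features
        # adj: (num_nodes, num_nodes) binary adjacency matrix (0/1 entries)
        n, d = x.shape
        h = self.norm1(x)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        # Reshape each to (num_heads, num_nodes, head_dim).
        q = q.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        k = k.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        v = v.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        # Standard scaled dot-product attention over all node pairs.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        # Structure-aware bias: pick bias index 1 where an edge exists, else 0.
        bias = self.edge_bias[:, adj.long()]  # (num_heads, n, n)
        attn = torch.softmax(scores + bias, dim=-1)
        out = (attn @ v).transpose(0, 1).reshape(n, d)
        x = x + self.out(out)                 # residual around attention
        return x + self.ffn(self.norm2(x))    # residual around feed-forward

# Usage on a small random undirected graph:
layer = GraphTransformerLayer(dim=16, num_heads=4)
x = torch.randn(6, 16)                        # 6 nodes, 16-dim features
adj = (torch.rand(6, 6) < 0.4).float()
adj = ((adj + adj.T) > 0).float()             # symmetrize
print(layer(x, adj).shape)                    # torch.Size([6, 16])
```

Because attention is computed over all node pairs, this dense formulation costs O(n^2) per layer, which is exactly the scalability bottleneck that efficiency-oriented work such as mini-batch and virtual-connection approaches targets.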
Papers
DST-GTN: Dynamic Spatio-Temporal Graph Transformer Network for Traffic Forecasting
Songtao Huang, Hongjin Song, Tianqi Jiang, Akbar Telikani, Jun Shen, Qingguo Zhou, Binbin Yong, Qiang Wu
Node-like as a Whole: Structure-aware Searching and Coarsening for Graph Classification
Xiaorui Qi, Qijie Bai, Yanlong Wen, Haiwei Zhang, Xiaojie Yuan
A Survey on Self-Supervised Graph Foundation Models: Knowledge-Based Perspective
Ziwen Zhao, Yixin Su, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang
VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections
Dongqi Fu, Zhigang Hua, Yan Xie, Jin Fang, Si Zhang, Kaan Sancak, Hao Wu, Andrey Malevich, Jingrui He, Bo Long