Graph Transformer
Graph Transformers (GTs) are a class of neural networks that apply transformer architectures to graph-structured data, aiming to overcome limitations of traditional graph neural networks such as restricted receptive fields. Current research focuses on improving GT efficiency and scalability for large graphs, designing novel attention mechanisms that better capture complex relationships, and mitigating challenges such as over-smoothing and adversarial vulnerability, for example by evaluating robustness under adaptive attacks and training with sharpness-aware minimization. The improved performance and expressiveness of GTs are impacting diverse fields, including traffic forecasting, drug discovery, and brain network analysis, by enabling more accurate and efficient modeling of complex relationships within these domains.
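To make the core idea concrete, below is a minimal sketch (assuming PyTorch) of a graph transformer layer: standard multi-head self-attention over node features, plus an additive attention bias on connected node pairs so that global attention can still respect graph structure. The class name `GraphTransformerLayer` and the simple adjacency-based bias are illustrative assumptions, not the method of any specific paper listed below; published models typically use richer structural encodings such as shortest-path or Laplacian features.

```python
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    """Illustrative graph transformer block: global attention + edge bias."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Learnable per-head scalar added to attention logits on edges,
        # a simple stand-in for richer structural encodings.
        self.edge_bias = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) 0/1 adjacency matrix.
        n, dim = x.shape
        h = self.norm1(x)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        # Reshape each to (num_heads, N, head_dim).
        q = q.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        k = k.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        v = v.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (H, N, N)
        # Structural bias: raise (or lower) logits on connected pairs.
        logits = logits + self.edge_bias.view(-1, 1, 1) * adj
        attn = logits.softmax(dim=-1)
        out = (attn @ v).transpose(0, 1).reshape(n, dim)
        x = x + self.proj(out)          # attention sublayer with residual
        return x + self.ffn(self.norm2(x))  # feed-forward sublayer

# Usage: 5 nodes with 16-dim features on a small ring graph.
x = torch.randn(5, 16)
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
layer = GraphTransformerLayer(dim=16, num_heads=4)
print(layer(x, adj).shape)  # torch.Size([5, 16])
```

Because every node attends to every other node, the layer sidesteps the limited receptive field of message-passing GNNs, at the cost of attention that scales quadratically in the number of nodes, which is precisely the scalability concern driving much of the research surveyed above.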
Papers
Multi-omics Sampling-based Graph Transformer for Synthetic Lethality Prediction
Xusheng Zhao, Hao Liu, Qiong Dai, Hao Peng, Xu Bai, Huailiang Peng
SignGT: Signed Attention-based Graph Transformer for Graph Representation Learning
Jinsong Chen, Gaichao Li, John E. Hopcroft, Kun He
LPFormer: An Adaptive Graph Transformer for Link Prediction
Harry Shomer, Yao Ma, Haitao Mao, Juanhui Li, Bo Wu, Jiliang Tang
Global Minima, Recoverability Thresholds, and Higher-Order Structure in GNNs
Drake Brown, Trevor Garrity, Kaden Parker, Jason Oliphant, Stone Carson, Cole Hanson, Zachary Boyd
Graph Transformer Network for Flood Forecasting with Heterogeneous Covariates
Jimeng Shi, Vitalii Stebliankin, Zhaonan Wang, Shaowen Wang, Giri Narasimhan
Atom-Motif Contrastive Transformer for Molecular Property Prediction
Wentao Yu, Shuo Chen, Chen Gong, Gang Niu, Masashi Sugiyama