Graph Transformer
Graph Transformers (GTs) are a class of neural networks that apply transformer architectures to graph-structured data, aiming to overcome the limitations of traditional graph neural networks. Current research focuses on improving GT efficiency and scalability on large graphs, developing novel attention mechanisms that better capture complex relationships, and addressing challenges such as over-smoothing and adversarial robustness, the latter typically evaluated with adaptive attacks and mitigated with techniques such as sharpness-aware minimization. These gains in performance and expressiveness are being felt in diverse fields, including traffic forecasting, drug discovery, and brain network analysis, where GTs enable more accurate and efficient modeling of complex relational data.
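The core idea, standard multi-head self-attention over node features plus an additive structural bias derived from the graph, can be sketched in a few lines of PyTorch. The sketch below is illustrative rather than drawn from any of the papers listed here: the bias is computed from the raw adjacency matrix, whereas published models often use learned shortest-path or edge encodings, and all names in the snippet are hypothetical.

```python
import torch
import torch.nn as nn

class GraphAttention(nn.Module):
    """Minimal graph-transformer attention layer (illustrative sketch):
    dense self-attention over node features with an additive per-head
    structural bias taken from the adjacency matrix."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # One learnable bias per head for "edge" vs. "no edge"; real models
        # often bucket by shortest-path distance instead.
        self.edge_bias = nn.Parameter(torch.zeros(num_heads, 2))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) 0/1 adjacency matrix.
        n, dim = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each projection to (heads, N, head_dim).
        q = q.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        k = k.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        v = v.view(n, self.num_heads, self.head_dim).transpose(0, 1)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (heads, N, N)
        # Every node pair is attended to (unlike local message passing),
        # but connected pairs receive their own learned offset.
        scores = scores + self.edge_bias[:, adj.long()]
        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(0, 1).reshape(n, dim)
        return self.out(out)

# Usage on a small random graph:
x = torch.randn(5, 32)                      # 5 nodes, 32-dim features
adj = (torch.rand(5, 5) < 0.3).float()      # random adjacency
print(GraphAttention(dim=32)(x, adj).shape) # torch.Size([5, 32])
```

Because every node attends to every other node, a single layer has a global receptive field, which is what distinguishes GTs from local message-passing GNNs; the resulting quadratic cost in the number of nodes is precisely what motivates the efficiency and scalability work mentioned above.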
Papers
Transformers are efficient hierarchical chemical graph learners
Zihan Pengmei, Zimu Li, Chih-chan Tien, Risi Kondor, Aaron R. Dinner
Revisiting Mobility Modeling with Graph: A Graph Transformer Model for Next Point-of-Interest Recommendation
Xiaohang Xu, Toyotaro Suzumura, Jiawei Yong, Masatoshi Hanai, Chuang Yang, Hiroki Kanezashi, Renhe Jiang, Shintaro Fukushima
MeT: A Graph Transformer for Semantic Segmentation of 3D Meshes
Giuseppe Vecchio, Luca Prezzavento, Carmelo Pino, Francesco Rundo, Simone Palazzo, Concetto Spampinato
Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal Brain Functional Connectome Embedding
Zijian Dong, Yilei Wu, Yu Xiao, Joanna Su Xian Chong, Yueming Jin, Juan Helen Zhou