Scalable Graph Transformer

Scalable graph transformers aim to bring transformer architectures to graph representation learning while overcoming the quadratic cost of full self-attention, which makes standard transformers impractical on large graphs. Current research focuses on efficient attention mechanisms, such as anchor-based or sparse attention, and on pre-training over massive graph datasets to improve generalization across diverse graphs and tasks. These advances make graph transformers applicable to large-scale real-world problems, such as node classification on massive social networks or knowledge graphs, extending transformer-based modeling to fields that depend on relational data analysis.
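
To illustrate the idea behind anchor-based attention, here is a minimal PyTorch sketch: nodes exchange information through a small set of k randomly sampled anchor nodes, reducing the cost from O(N^2) to O(N*k). The class and parameter names (`AnchorAttention`, `num_anchors`) are illustrative assumptions, not taken from any specific paper or library.

```python
import torch
import torch.nn as nn

class AnchorAttention(nn.Module):
    """Two-stage attention through k anchors: O(N*k) instead of O(N^2)."""
    def __init__(self, dim: int, num_anchors: int = 32):
        super().__init__()
        self.num_anchors = num_anchors
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features for a single graph
        n = x.size(0)
        idx = torch.randperm(n)[: self.num_anchors]   # sample k anchor nodes
        anchors = x[idx]                              # (k, dim)

        # Stage 1: anchors aggregate information from all nodes, cost O(k*N).
        a = torch.softmax(self.q(anchors) @ self.k(x).t() * self.scale, dim=-1)
        anchor_repr = a @ self.v(x)                   # (k, dim)

        # Stage 2: every node attends only to the k anchors, cost O(N*k).
        b = torch.softmax(self.q(x) @ self.k(anchor_repr).t() * self.scale, dim=-1)
        return b @ self.v(anchor_repr)                # (N, dim)
```

Because k is a fixed constant independent of graph size, both stages scale linearly in the number of nodes; published methods differ mainly in how anchors are chosen (random sampling here, versus learned or structure-aware selection).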

Papers