Scalable Graph Transformer
Scalable graph transformers aim to bring the representational power of transformer architectures to graph representation learning while overcoming the computational limits of standard designs, most notably the quadratic time and memory cost of full self-attention over all node pairs. Current research focuses on efficient attention mechanisms, such as anchor-node attention or sparse attention, and on pre-training over massive graph datasets to improve generalization across diverse graphs and tasks. These advances make graph transformers practical for large-scale real-world problems, such as node classification on massive social networks or knowledge graphs, and benefit many fields that rely on relational data analysis.
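To make the anchor-node idea concrete, below is a minimal PyTorch sketch of one common pattern: anchors first aggregate information from all nodes, then every node attends only to the K anchor summaries, so the cost drops from O(N^2) to O(N*K). This is an illustrative assumption, not any specific paper's method; the class name AnchorAttention, the parameter num_anchors, and the random anchor selection are all hypothetical choices.

```python
# Sketch of anchor-based attention for graph transformers (assumed design,
# not a reference implementation). Nodes exchange information through a small
# set of K anchor nodes instead of attending to all N nodes directly.
import torch
import torch.nn as nn


class AnchorAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, anchor_idx: torch.Tensor) -> torch.Tensor:
        # x: [N, dim] node features; anchor_idx: [K] indices of anchor nodes.
        anchors = x[anchor_idx]                                   # [K, dim]
        # Step 1: anchors attend to all nodes (K x N scores, i.e. O(N*K)).
        a2n = torch.softmax(
            (self.q_proj(anchors) @ self.k_proj(x).T) * self.scale, dim=-1
        )
        anchor_summary = a2n @ self.v_proj(x)                     # [K, dim]
        # Step 2: every node attends only to the K anchor summaries (N x K scores).
        n2a = torch.softmax(
            (self.q_proj(x) @ self.k_proj(anchor_summary).T) * self.scale, dim=-1
        )
        return n2a @ self.v_proj(anchor_summary)                  # [N, dim]


if __name__ == "__main__":
    N, dim, K = 10_000, 64, 32
    x = torch.randn(N, dim)
    anchor_idx = torch.randperm(N)[:K]   # e.g. random anchor selection
    out = AnchorAttention(dim)(x, anchor_idx)
    print(out.shape)                     # torch.Size([10000, 64])
```

In practice, anchor selection and the attention wiring vary by method (degree-based sampling, learned virtual nodes, or clustering), but the two-stage node-anchor-node pattern above is the basic mechanism that avoids materializing the full N x N attention matrix.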