Graph Transformer
Graph Transformers (GTs) are a class of neural networks that apply transformer architectures to graph-structured data, aiming to overcome limitations of traditional graph neural networks such as limited receptive fields and over-smoothing. Current research focuses on improving GT efficiency and scalability on large graphs, developing novel attention mechanisms that better capture complex relationships, and strengthening robustness, for example by evaluating models with adaptive adversarial attacks and improving generalization with sharpness-aware minimization. The improved performance and expressiveness of GTs are impacting diverse fields, including traffic forecasting, drug discovery, and brain network analysis, by enabling more accurate and efficient modeling of complex relationships within these domains.
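The core idea can be sketched minimally: a GT layer is standard scaled dot-product attention, with the graph injected by biasing or masking the attention scores. The sketch below (hypothetical names; numpy only) shows the edge-masked variant, where each node attends only to its neighbors; other GT designs instead use global attention with structural encodings added to the node features or scores.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(X, A, Wq, Wk, Wv):
    """Single-head graph transformer attention (illustrative sketch).

    X: (n, d) node features; A: (n, n) adjacency with self-loops;
    Wq, Wk, Wv: projection matrices. Attention scores for non-edges
    are set to a large negative value so softmax zeroes them out.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])   # scaled dot-product
    scores = np.where(A > 0, scores, -1e9)      # mask non-neighbors
    return softmax(scores) @ V                  # aggregate neighbor values
```

A full model would stack such layers with multi-head attention, residual connections, and feed-forward blocks, exactly as in a standard transformer; the only graph-specific ingredient here is the adjacency mask.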
Papers
G$^2$V$^2$former: Graph Guided Video Vision Transformer for Face Anti-Spoofing
Jingyi Yang, Zitong Yu, Xiuming Ni, Jia He, Hui Li
Graph Triple Attention Network: A Decoupled Perspective
Xiaotang Wang, Yun Zhu, Haizhou Shi, Yongchao Liu, Chuntao Hong
Bridging Training and Execution via Dynamic Directed Graph-Based Communication in Cooperative Multi-Agent Systems
Zhuohui Zhang, Bin He, Bin Cheng, Gang Li