Graph Foundation Model
Graph Foundation Models (GFMs) are generalizable graph neural networks pre-trained on massive, diverse graph datasets to improve performance across a range of downstream tasks and domains. Current research focuses on effective self-supervised pre-training methods, architectures such as Graph Mixture-of-Experts and multi-headed graph convolutional networks, and challenges such as handling heterogeneous graph structures and features. By reducing the need for task-specific model training, GFMs promise more efficient and robust graph machine learning in diverse fields such as network analysis, materials science, and recommendation systems.
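To make the Graph Mixture-of-Experts idea mentioned above concrete, here is a minimal NumPy sketch of one such layer: several GCN-style "expert" transformations share a normalized message-passing step, and a per-node gating network mixes their outputs. All names, shapes, and the gating scheme are illustrative assumptions, not the design of any specific paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_moe_layer(A, X, expert_weights, gate_weight):
    """One Graph Mixture-of-Experts layer (illustrative sketch).

    A: (n, n) adjacency with self-loops; X: (n, d) node features;
    expert_weights: list of k matrices of shape (d, h), one per expert;
    gate_weight: (d, k) gating matrix, k = number of experts.
    """
    # Symmetrically normalize the adjacency, as in a standard GCN.
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt
    # Per-node gating scores over the k experts.
    gates = softmax(X @ gate_weight)                                 # (n, k)
    # Each expert performs its own propagate-and-transform step.
    expert_outs = np.stack([A_hat @ X @ W for W in expert_weights])  # (k, n, h)
    # Combine expert outputs with the per-node gate weights.
    return np.einsum("nk,knh->nh", gates, expert_outs)

# Toy example: 5 nodes, 4 input features, 3 output features, 2 experts.
rng = np.random.default_rng(0)
n, d, h, k = 5, 4, 3, 2
A = np.eye(n) + (rng.random((n, n)) > 0.6).astype(float)
A = np.maximum(A, A.T)  # make the graph undirected
X = rng.normal(size=(n, d))
experts = [rng.normal(size=(d, h)) for _ in range(k)]
gate_W = rng.normal(size=(d, k))
out = graph_moe_layer(A, X, experts, gate_W)
print(out.shape)  # (5, 3)
```

The gating lets different nodes route through different experts, which is one way such architectures aim to cope with heterogeneous graph structures and features; a trained model would of course learn these weights rather than sample them randomly.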
Papers
Scalable Training of Trustworthy and Energy-Efficient Predictive Graph Foundation Models for Atomistic Materials Modeling: A Case Study with HydraGNN
Massimiliano Lupo Pasini, Jong Youl Choi, Kshitij Mehta, Pei Zhang, David Rogers, Jonghyun Bae, Khaled Z. Ibrahim, Ashwin M. Aji, Karl W. Schulz, Jorda Polo, Prasanna Balaprakash
GraphFM: A Comprehensive Benchmark for Graph Foundation Model
Yuhao Xu, Xinqi Liu, Keyu Duan, Yi Fang, Yu-Neng Chuang, Daochen Zha, Qiaoyu Tan