Graph Pretraining

Graph pretraining aims to train generalizable graph neural networks (GNNs) by learning from large, diverse graph datasets before fine-tuning on specific tasks. Current research focuses on scalable architectures, such as transformer-based models and Perceiver-based encoders, that can handle large-scale, multi-domain graphs and are typically pretrained with masked-autoencoder or contrastive objectives. This approach promises to greatly reduce the need for extensive dataset-specific model tuning, yielding more efficient and robust GNNs across applications such as molecular property prediction and web-scale graph analysis. In addition, automated graph machine learning is emerging as a way to optimize the design and hyperparameters of these models.
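
To make the masked-autoencoder objective mentioned above concrete, the sketch below shows one pretraining step in plain PyTorch: node features are randomly hidden, a small GCN encoder processes the corrupted graph, and a decoder reconstructs the hidden features. The layer sizes, mask rate, and the `normalize_adjacency` and `masked_feature_pretrain_step` helpers are illustrative assumptions, not taken from any specific paper.

```python
# Minimal sketch of masked-feature pretraining for a GNN (assumed setup, plain PyTorch).
import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, as in the standard GCN propagation rule."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

class GCNEncoder(nn.Module):
    """Two-layer graph convolutional encoder: H = A_hat @ relu(A_hat @ X W1) W2."""
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = torch.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

def masked_feature_pretrain_step(encoder, decoder, x, adj_norm, mask_rate=0.5):
    """One masked-autoencoder step: hide node features, reconstruct them from the graph."""
    mask = torch.rand(x.size(0)) < mask_rate      # nodes whose features are hidden
    x_masked = x.clone()
    x_masked[mask] = 0.0                          # simple zero-masking of node features
    z = encoder(x_masked, adj_norm)               # encode the corrupted graph
    x_rec = decoder(z)                            # per-node feature reconstruction
    return ((x_rec[mask] - x[mask]) ** 2).mean()  # loss only on the masked nodes

# Usage on a toy graph with 4 nodes and random features.
x = torch.randn(4, 16)
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
adj_norm = normalize_adjacency(adj)
encoder = GCNEncoder(16, 32, 32)
decoder = nn.Linear(32, 16)
loss = masked_feature_pretrain_step(encoder, decoder, x, adj_norm)
loss.backward()
```

A contrastive objective would replace the reconstruction loss with an agreement loss between two augmented views of the same graph; the encoder and downstream fine-tuning pipeline stay the same.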

Papers