Paper ID: 2303.08644

Feature propagation as self-supervision signals on graphs

Oscar Pina, Verónica Vilaplana

Self-supervised learning is gaining considerable attention as a solution to avoid the requirement of extensive annotations in representation learning on graphs. Current algorithms are based on contrastive learning, which is computationally and memory expensive, and on the assumption of invariance under certain graph augmentations. However, graph transformations such as edge sampling may modify the semantics of the data, so the invariance assumption may be incorrect. We introduce Regularized Graph Infomax (RGI), a simple yet effective framework for node-level self-supervised learning that trains a graph neural network encoder by maximizing the mutual information between output node embeddings and their propagation through the graph, which encode the nodes' local and global context, respectively. RGI does not use graph data augmentations but instead generates self-supervision signals with feature propagation, is non-contrastive, and does not depend on a two-branch architecture. We evaluate RGI in both transductive and inductive settings on popular graph benchmarks and show that it can achieve state-of-the-art performance despite its simplicity.
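The abstract's core idea is to use feature propagation itself as the self-supervision target. Below is a minimal illustrative sketch of that step, assuming a symmetrically normalized adjacency with self-loops and K propagation iterations; the operator choice, function name, and parameters are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def propagate_features(Z, A, K=2):
    """Propagate node embeddings K steps over the graph.

    Z: (N, d) node embeddings produced by the GNN encoder (local context).
    A: (N, N) binary adjacency matrix without self-loops (assumed).
    Returns: (N, d) propagated embeddings capturing global context.

    Uses the symmetrically normalized operator D^{-1/2}(A+I)D^{-1/2},
    a common choice in GNNs; the paper's exact operator may differ.
    """
    N = A.shape[0]
    A_hat = A + np.eye(N)                      # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # D^{-1/2}
    P = D_inv_sqrt @ A_hat @ D_inv_sqrt        # normalized propagation operator
    Z_bar = Z
    for _ in range(K):
        Z_bar = P @ Z_bar                      # one feature-propagation step
    return Z_bar
```

Under this reading, the propagated embeddings Z_bar serve as targets and training maximizes the mutual information between Z and Z_bar, with no augmented views or second encoder branch required.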

Submitted: Mar 15, 2023