Paper ID: 2405.19375

Improving global awareness of linkset predictions using Cross-Attentive Modulation tokens

Félix Marcoccia, Cédric Adjih, Paul Mühlethaler

This work introduces Cross-Attentive Modulation (CAM) tokens: tokens whose initial value is learned, which gather information through cross-attention and modulate the nodes and edges accordingly. These tokens are meant to improve the global awareness of link prediction models which, being based on graph neural networks, can struggle to capture graph-level features. This inability to form high-level representations is particularly limiting when predicting multiple links or entire sets of links. We implement CAM tokens in a simple attention-based link prediction model and in a graph transformer, which we also use in a denoising diffusion framework. A brief introduction to our toy datasets is followed by benchmarks showing that CAM tokens improve the performance of the models they supplement and outperform a baseline equipped with diverse statistical graph attributes.
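The abstract describes the CAM token mechanism only at a high level: a token with a learned initial value gathers graph-level context via cross-attention, then modulates node and edge representations. A minimal PyTorch sketch of that idea is given below; it is not the authors' implementation, and the specific choices (multi-head attention, a FiLM-style scale-and-shift modulation, modulating only node features) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CAMToken(nn.Module):
    """Sketch of a Cross-Attentive Modulation (CAM) token.

    A token with a learned initial value attends over node embeddings to
    gather graph-level information, then modulates every node with a
    scale/shift derived from the updated token. Head count and the
    modulation form are illustrative assumptions, not from the paper.
    """
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Learned initial value of the CAM token.
        self.token = nn.Parameter(torch.randn(1, 1, dim))
        # Cross-attention: the token is the query; nodes are keys/values.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Map the updated token to per-feature scale and shift.
        self.to_mod = nn.Linear(dim, 2 * dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, dim)
        q = self.token.expand(nodes.size(0), -1, -1)
        ctx, _ = self.attn(q, nodes, nodes)      # (batch, 1, dim)
        scale, shift = self.to_mod(ctx).chunk(2, dim=-1)
        # Modulate every node with the globally informed token.
        return nodes * (1 + scale) + shift


cam = CAMToken(dim=16)
x = torch.randn(2, 5, 16)   # 2 graphs, 5 nodes each
y = cam(x)
print(tuple(y.shape))       # (2, 5, 16): per-node features, globally modulated
```

Edge features could be modulated the same way, with the token cross-attending over edge embeddings and emitting a second scale/shift pair.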

Submitted: May 28, 2024