Graph Distillation

Graph distillation aims to create smaller, more efficient graph datasets or models that retain the key information of their larger originals, thereby accelerating graph neural network (GNN) training and inference. Current research focuses on developing novel distillation algorithms, such as those leveraging structural attention, multi-granularity information, or eigenbasis matching, either to condense large graphs into compact synthetic ones or to transfer knowledge from large teacher networks to smaller student networks. These techniques preserve or improve GNN performance on a range of tasks, including node and graph classification, instance segmentation, and semi-supervised continual learning, while reducing computational cost and data storage requirements. The resulting efficiency gains are particularly significant for resource-constrained applications and large-scale datasets.
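To make the teacher-to-student transfer concrete, the sketch below shows the classic soft-label distillation objective applied to node-classification logits. It is a minimal illustration rather than any specific paper's method: it assumes PyTorch, and the function name `distillation_loss`, the temperature `T`, and the mixing weight `alpha` are illustrative placeholders, with random tensors standing in for teacher and student GNN outputs.

```python
# Minimal sketch of Hinton-style soft-label distillation for GNN node logits.
# Assumes PyTorch; names and hyperparameters here are illustrative only.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Combine supervised cross-entropy with a KL term that pulls the
    student toward the teacher's temperature-softened class distribution."""
    # Soft targets: KL divergence between the teacher's softened distribution
    # and the student's, scaled by T^2 so gradient magnitudes stay comparable
    # across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


if __name__ == "__main__":
    # Toy usage: 6 nodes, 4 classes, random logits standing in for GNN outputs.
    student = torch.randn(6, 4, requires_grad=True)
    teacher = torch.randn(6, 4)
    labels = torch.randint(0, 4, (6,))
    loss = distillation_loss(student, teacher, labels)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

Dataset-side approaches (e.g., condensation via gradient or eigenbasis matching) replace the logit-matching term with objectives defined over a small synthetic graph, but the overall pattern of matching a compact artifact to a large original is the same.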

Papers