Graph Contrastive Learning
Graph contrastive learning (GCL) is a self-supervised learning paradigm for graph-structured data that learns robust, informative node or graph representations by contrasting augmented views of the same data: embeddings of two views of the same node or graph are pulled together, while embeddings of different nodes or graphs are pushed apart. Current research focuses on designing data augmentations that avoid discarding task-relevant information or introducing noise, developing more effective negative sampling strategies, and applying GCL to diverse domains such as recommendation systems, fraud detection, and medical image analysis. By enabling effective learning from large, unlabeled graph datasets, these advances reduce reliance on expensive manual annotation and improve performance on downstream tasks.
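The contrastive objective most GCL methods build on can be illustrated with a minimal sketch of the NT-Xent (InfoNCE-style) loss, computed here in plain NumPy. The function name and the choice of cosine similarity with a temperature of 0.5 are illustrative defaults, not drawn from any specific paper above; real systems compute the embeddings with a GNN encoder applied to two augmented graph views.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two views of the same nodes.

    z1, z2: (n, d) arrays of node embeddings from two augmented views.
    Row i of z1 and row i of z2 form the positive pair; the remaining
    n - 1 rows of z2 act as negatives for node i.
    """
    # L2-normalize so the dot product below is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (n, n) similarity logits
    # Row-wise log-softmax; the positive logit for node i is sim[i, i].
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Loss is the mean negative log-probability of the positive pairs.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views score lower (better) than unrelated embeddings.
loss_aligned = nt_xent_loss(z, z)
loss_random = nt_xent_loss(z, rng.normal(size=(8, 16)))
```

This is the sense in which views are "contrasted": the loss falls as each node's two views agree and rises as a node resembles its negatives, which is also why negative sampling strategy matters so much in practice.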
Papers
SpeGCL: Self-supervised Graph Spectrum Contrastive Learning without Positive Samples
Yuntao Shou, Xiangyong Cao, Deyu Meng
GraphCLIP: Enhancing Transferability in Graph Foundation Models for Text-Attributed Graphs
Yun Zhu, Haizhou Shi, Xiaotang Wang, Yongchao Liu, Yaoke Wang, Boci Peng, Chuntao Hong, Siliang Tang