Graph Contrastive Learning
Graph contrastive learning (GCL) is a self-supervised learning paradigm for graph-structured data that learns robust, informative node or graph representations by contrasting augmented views of the same graph. Current research focuses on data augmentation techniques that avoid information loss and noise, more sophisticated negative sampling strategies, and applications such as recommendation systems, fraud detection, and medical image analysis. By enabling effective learning from large, unlabeled graph datasets, these advances reduce reliance on expensive manual annotation and improve performance on downstream tasks.
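The core recipe described above — generate two augmented views of a graph, encode them, and pull matching nodes together while pushing non-matching nodes apart — can be sketched as follows. This is a minimal illustration, not any specific paper's method: it assumes a toy 4-node graph, uses random edge dropping as the augmentation, a one-step mean-aggregation step as a stand-in for a trained GNN encoder, and an InfoNCE objective; the names `augment`, `encode`, and `info_nce` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with a symmetric adjacency matrix and random features
# (hypothetical data for illustration only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))

def augment(adj, drop_prob=0.2):
    """Edge dropping: remove each undirected edge with probability drop_prob."""
    upper = np.triu(rng.random(adj.shape) >= drop_prob, k=1)
    keep = upper + upper.T          # symmetric keep-mask
    return adj * keep

def encode(adj, feats):
    """One-step mean aggregation over neighbours (stand-in for a trained GNN)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)
    h = (a_hat / deg) @ feats                     # mean of neighbour features
    return h / np.linalg.norm(h, axis=1, keepdims=True)  # L2-normalise rows

def info_nce(z1, z2, tau=0.5):
    """InfoNCE: node i in view 1 and node i in view 2 form the positive pair;
    every other node in view 2 acts as a negative."""
    sim = z1 @ z2.T / tau                         # cosine similarities (rows unit-norm)
    logits = sim - sim.max(axis=1, keepdims=True) # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # average over positive pairs

z1 = encode(augment(A), X)
z2 = encode(augment(A), X)
loss = info_nce(z1, z2)
print(f"contrastive loss: {loss:.4f}")
```

In a real pipeline the encoder would be a parameterised GNN and this loss would be minimised by gradient descent, so that representations become invariant to the chosen augmentations.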
Papers
Unsupervised Social Event Detection via Hybrid Graph Contrastive Learning and Reinforced Incremental Clustering
Yuanyuan Guo, Zehua Zang, Hang Gao, Xiao Xu, Rui Wang, Lixiang Liu, Jiangmeng Li
Understanding Community Bias Amplification in Graph Representation Learning
Shengzhong Zhang, Wenjie Yang, Yimin Zhang, Hongwei Zhang, Divin Yan, Zengfeng Huang
StructComp: Substituting Propagation with Structural Compression in Training Graph Contrastive Learning
Shengzhong Zhang, Wenjie Yang, Xinyuan Cao, Hongwei Zhang, Zengfeng Huang
GRENADE: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs
Yichuan Li, Kaize Ding, Kyumin Lee
Graph Ranking Contrastive Learning: A Extremely Simple yet Efficient Method
Yulan Hu, Sheng Ouyang, Jingyu Liu, Ge Chen, Zhirui Yang, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Yong Liu