Graph Representation Learning
Graph representation learning aims to encode graph-structured data into low-dimensional vector representations suitable for machine learning tasks. Current research focuses on improving the expressiveness and efficiency of graph neural networks (GNNs), exploring alternative approaches such as topological embeddings, and leveraging large language models for enhanced interpretability and for handling text-attributed graphs. These advances are crucial for tackling challenges in domains such as recommendation systems, anomaly detection, and biological data analysis, where graph-structured data is prevalent and where efficient, accurate analysis is critical.
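To make the core idea concrete, the following is a minimal NumPy sketch of one mean-aggregation message-passing layer, the basic building block of many GNNs: each node averages the features of itself and its neighbors, then applies a learned linear projection and a nonlinearity. The function name, toy graph, and fixed weights are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

def gnn_layer(adj, features, weight):
    """One mean-aggregation message-passing layer (illustrative sketch).

    adj:      (n, n) binary adjacency matrix
    features: (n, d_in) node feature matrix
    weight:   (d_in, d_out) projection matrix (learned in practice)
    """
    # Add self-loops so each node retains its own features.
    a_hat = adj + np.eye(adj.shape[0])
    # Row-normalize: each node averages over itself and its neighbors.
    deg = a_hat.sum(axis=1, keepdims=True)
    a_norm = a_hat / deg
    # Aggregate neighbor features, project, apply a ReLU nonlinearity.
    return np.maximum(a_norm @ features @ weight, 0.0)

# Toy example: a 3-node path graph 0-1-2 with 2-dimensional features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3, 2)        # simple indicator-style features
w = np.ones((2, 2))     # fixed weights, purely for illustration
emb = gnn_layer(adj, x, w)
print(emb.shape)  # (3, 2): one 2-d embedding per node
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is what gives GNN embeddings their structural expressiveness.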
Papers
A Graph is Worth $K$ Words: Euclideanizing Graph using Pure Transformer
Zhangyang Gao, Daize Dong, Cheng Tan, Jun Xia, Bozhen Hu, Stan Z. Li
Advancing Graph Representation Learning with Large Language Models: A Comprehensive Survey of Techniques
Qiheng Mao, Zemin Liu, Chenghao Liu, Zhuo Li, Jianling Sun
L2G2G: a Scalable Local-to-Global Network Embedding with Graph Autoencoders
Ruikang Ouyang, Andrew Elliott, Stratis Limnios, Mihai Cucuringu, Gesine Reinert
A Survey of Few-Shot Learning on Graphs: from Meta-Learning to Pre-Training and Prompt Learning
Xingtong Yu, Yuan Fang, Zemin Liu, Yuxia Wu, Zhihao Wen, Jianyuan Bo, Xinming Zhang, Steven C.H. Hoi