Textual Graph
Textual graphs (also called text-attributed graphs) represent data in which nodes carry textual information and edges signify relationships between them, supporting analysis across diverse domains. Current research focuses on improving representation learning for these graphs, employing techniques such as graph autoencoders and retrieval-augmented generation (RAG) to capture both textual and structural information, often leveraging large language models (LLMs) as text encoders. These advances target downstream tasks such as node classification, link prediction, and question answering over textual graphs, with applications ranging from knowledge graph reasoning to scene graph understanding. Efficient training and inference methods for LLMs applied to textual graphs are also a key area of investigation.
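As a minimal sketch of the idea above, the example below builds a tiny textual graph, encodes each node's text into a vector, and mixes in structure with one round of neighbor mean-aggregation (a single message-passing step, as in a simple GNN layer). The hash-based `embed` function is a toy stand-in for an LLM text encoder, and the node names and texts are invented for illustration.

```python
import hashlib

def embed(text, dim=8):
    """Toy stand-in for an LLM text encoder: deterministic hash features in [0, 1]."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dim]]

# Nodes carry text; edges signify relationships (here an undirected "related" link).
nodes = {
    "paper_a": "Graph autoencoders for textual graph representation learning",
    "paper_b": "Retrieval-augmented generation over knowledge graphs",
}
edges = [("paper_a", "paper_b")]

# Step 1: encode each node's text.
emb = {n: embed(t) for n, t in nodes.items()}

# Step 2: build adjacency lists.
neighbors = {n: [] for n in nodes}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# Step 3: fuse text and structure — average each node's own embedding with
# the mean of its neighbors' embeddings (one message-passing step).
fused = {}
for n in nodes:
    msgs = [emb[m] for m in neighbors[n]] or [emb[n]]  # isolated nodes keep self
    agg = [sum(vals) / len(vals) for vals in zip(*msgs)]
    fused[n] = [(s + a) / 2 for s, a in zip(emb[n], agg)]

print(len(fused["paper_a"]))
```

The fused vectors could then feed a downstream task head (e.g., node classification); in practice the toy encoder would be replaced by a pretrained LLM embedding and the single aggregation step by stacked, trainable GNN layers.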