Knowledge Graph to Text

Knowledge graph to text (KG-to-text) generation aims to automatically convert structured knowledge graphs into coherent, human-readable text, bridging the gap between machine-readable data and human comprehension. Current research focuses on improving the faithfulness and fluency of generated text by incorporating multi-granularity graph-structure attention mechanisms into pre-trained language models (PLMs) built on Transformer architectures, and by employing techniques such as contrastive learning and reinforcement learning to improve factual accuracy. This line of work is significant for advancing natural language generation, enabling more effective knowledge sharing and supporting applications such as conversational AI and text summarization.
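
As a concrete illustration of the PLM-based pipeline described above, the sketch below linearizes a toy knowledge graph into a token sequence and decodes text with an off-the-shelf seq2seq checkpoint. The `<H>`/`<R>`/`<T>` markers and the `t5-small` model are illustrative assumptions, not a fixed standard; real KG-to-text systems fine-tune on paired graph-text data and often layer graph-structure-aware attention on top of this basic scheme.

```python
# Minimal sketch of KG-to-text with a pre-trained seq2seq PLM.
# Assumptions: triples are linearized with "<H> <R> <T>" separators (one
# common convention among several), and the base "t5-small" checkpoint is
# used without the task-specific fine-tuning that real KG-to-text systems
# require, so the output here is illustrative only.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Example knowledge graph: a small set of (head, relation, tail) triples.
triples = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "born in", "London"),
]

# Linearize the graph into a flat input sequence for the encoder.
# In a fine-tuned system the marker tokens would typically be registered
# as special tokens in the tokenizer's vocabulary.
linearized = " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

inputs = tokenizer(linearized, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The linearization step is where the graph-structure attention mechanisms mentioned above come in: rather than treating the flattened triples as ordinary text, structure-aware models constrain or bias attention so that tokens attend preferentially within entities, within triples, and across graph neighborhoods.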

Papers