Knowledge Graph to Text
Knowledge graph to text (KG-to-text) generation aims to automatically convert structured knowledge graphs into coherent and human-readable text, bridging the gap between machine-understandable data and human comprehension. Current research focuses on improving the faithfulness and fluency of generated text by incorporating multi-granularity graph structure attention mechanisms within pre-trained language models (PLMs), such as transformers, and employing techniques like contrastive learning and reinforcement learning to enhance model accuracy. This field is significant for advancing natural language generation, enabling more effective knowledge sharing and facilitating applications in areas like conversational AI and text summarization.
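To make the input side of this task concrete, a common preprocessing step is to linearize a knowledge graph's (subject, relation, object) triples into a flat token sequence that a pre-trained seq2seq model can consume. The sketch below uses illustrative `<H>`/`<R>`/`<T>` marker tokens; the exact markers and separator convention vary across papers and models and are an assumption here, not a fixed standard.

```python
# Minimal sketch: linearize a knowledge graph into a token sequence
# suitable as input to a seq2seq PLM. Marker tokens are illustrative.

def linearize_triples(triples):
    """Turn (subject, relation, object) triples into one input string,
    marking each slot so the model can recover the graph structure."""
    parts = []
    for subj, rel, obj in triples:
        parts.append(f"<H> {subj} <R> {rel} <T> {obj}")
    return " ".join(parts)

graph = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "born in", "London"),
]
print(linearize_triples(graph))
# -> <H> Alan Turing <R> field <T> computer science <H> Alan Turing <R> born in <T> London
```

A faithfulness-oriented model would then be trained to generate a sentence such as "Alan Turing, born in London, worked in computer science" from this sequence, with graph-structure attention or contrastive objectives discouraging hallucinated facts.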