Factual Inconsistency
Factual inconsistency in machine-generated text, particularly output from large language models (LLMs), is the focus of a significant research area aimed at improving the reliability and trustworthiness of AI-generated content. Current work concentrates on robust methods for detecting and correcting such inconsistencies, using techniques such as fine-grained fact decomposition, prompt-based classification, and natural language inference (NLI) models enhanced with task-specific taxonomies. These advances are crucial for curbing the spread of misinformation and improving the overall quality and dependability of AI-powered text generation systems across a range of applications.
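To make the detection pipeline concrete, the sketch below combines two of the techniques mentioned above: fine-grained decomposition of a generated summary into individual claims (here, simply sentences) and an off-the-shelf NLI model that checks each claim against the source document. This is a minimal illustration, not any specific paper's method; the choice of the `roberta-large-mnli` checkpoint, the sentence-level decomposition, and the contradiction threshold are all assumptions for the example.

```python
# Minimal sketch: sentence-level factual-consistency check with an NLI model.
# Assumptions: roberta-large-mnli as the checker, naive sentence splitting as
# the "fact decomposition" step, and a simple contradiction threshold.
import re

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed checkpoint, not from the source papers
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def check_consistency(source: str, summary: str, threshold: float = 0.5):
    """Flag summary sentences the NLI model scores as contradicting the source."""
    # Fine-grained decomposition: split the summary into sentence-level claims.
    claims = [s.strip() for s in re.split(r"(?<=[.!?])\s+", summary) if s.strip()]
    results = []
    for claim in claims:
        # Premise = source document, hypothesis = individual claim.
        inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1)[0]
        # Look up the contradiction label from the model config rather than
        # hard-coding an index, since label order varies across checkpoints.
        label_ids = {v.upper(): k for k, v in model.config.id2label.items()}
        p_contradiction = probs[label_ids["CONTRADICTION"]].item()
        results.append((claim, p_contradiction, p_contradiction > threshold))
    return results


if __name__ == "__main__":
    source = "The report was published in 2021 and covers energy use in Europe."
    summary = "The report covers energy use in Europe. It was published in 2019."
    for claim, score, flagged in check_consistency(source, summary):
        print(f"{'INCONSISTENT' if flagged else 'ok':12s} p={score:.2f}  {claim}")
```

In practice, published systems replace the naive sentence split with claim or fact extraction and calibrate the decision threshold on labeled data; the sketch only shows how decomposition and NLI-based checking fit together.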