Factual Error
Factual errors in text generated by large language models (LLMs) are a significant research problem, and reducing them is central to improving the accuracy and reliability of AI-generated content. Current research focuses on detecting and correcting these errors with techniques such as ensemble prompting, external knowledge retrieval, and iterative constrained editing, applied across model architectures including transformer-based LLMs. These advances are crucial for the trustworthiness of AI-generated summaries, reports, and other textual outputs in applications ranging from scientific literature review to healthcare and finance. The ultimate goal is more reliable and responsible AI systems.
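One of the detection techniques named above, ensemble prompting, can be illustrated with a minimal sketch: the same claim is judged under several prompt phrasings and the verdicts are combined by majority vote. The `llm_judge` function below is a hypothetical stand-in (a toy heuristic, not a real model call), so the example runs self-contained; in practice it would query an actual LLM API.

```python
from collections import Counter

# Hypothetical stand-in for an LLM call; a real system would query a model API.
def llm_judge(prompt: str, claim: str) -> str:
    # Toy heuristic so the sketch runs without a model: flag a claim that
    # pairs the Moon landing with an obviously wrong year.
    return "error" if "Moon" in claim and "1066" in claim else "ok"

# Several phrasings of the same fact-checking question (the "ensemble").
PROMPTS = [
    "Is the following claim factually correct? Answer ok/error.",
    "Fact-check this statement. Answer ok/error.",
    "Would a domain expert dispute this claim? Answer ok/error.",
]

def ensemble_check(claim: str) -> str:
    """Majority vote over the prompt ensemble for one claim."""
    votes = Counter(llm_judge(p, claim) for p in PROMPTS)
    return votes.most_common(1)[0][0]

print(ensemble_check("The Moon landing happened in 1066."))      # error
print(ensemble_check("Water boils at 100 C at sea level."))      # ok
```

Averaging verdicts across prompt variants reduces sensitivity to any single phrasing, which is the core idea behind ensemble prompting for error detection.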
Papers
(20 papers indexed, dated from May 25, 2022 to October 21, 2024; titles and links not preserved.)