Factual Consistency
Research on factual consistency in text generation, particularly summarization, aims to ensure that automatically generated text accurately reflects its source material, minimizing hallucinations and other inconsistencies. Current work emphasizes robust evaluation metrics, often leveraging large language models (LLMs) and techniques such as natural language inference (NLI) and question answering, to identify and quantify factual errors at various granularities (e.g., sentence, entity, or fact level). These advances are crucial for improving the reliability and trustworthiness of AI-generated content in applications ranging from clinical notes to news summaries, and for building more responsible and effective natural language processing systems.
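To make the NLI-based approach concrete, below is a minimal sketch of sentence-level factual consistency scoring: each summary sentence is treated as a hypothesis and scored for entailment against the source. The model checkpoint (`roberta-large-mnli`) and the mean-over-sentences aggregation are illustrative assumptions, not the method of any specific paper listed here.

```python
# Minimal sketch of NLI-based factual consistency scoring.
# Assumptions: roberta-large-mnli as the NLI model; mean entailment
# probability over summary sentences as the aggregate score.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # any MNLI-trained checkpoint would work
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

def consistency_score(source: str, summary_sentences: list[str]) -> float:
    """Mean entailment probability over summary sentences (sentence-level granularity)."""
    scores = [entailment_prob(source, s) for s in summary_sentences]
    return sum(scores) / len(scores)

source = "The trial enrolled 120 patients; 60 received the drug and 60 a placebo."
summary = [
    "120 patients took part in the trial.",
    "All patients received the drug.",  # inconsistent with the source
]
print(consistency_score(source, summary))
```

A contradicted sentence like the second one above drags the score down, which is what lets sentence-level metrics localize errors rather than only flag a whole summary.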
Papers
SYNFAC-EDIT: Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization
Prakamya Mishra, Zonghai Yao, Parth Vashisht, Feiyun Ouyang, Beining Wang, Vidhi Dhaval Mody, Hong Yu
Factual Consistency Evaluation of Summarisation in the Era of Large Language Models
Zheheng Luo, Qianqian Xie, Sophia Ananiadou
Synthetic Imitation Edit Feedback for Factual Alignment in Clinical Summarization
Prakamya Mishra, Zonghai Yao, Shuwei Chen, Beining Wang, Rohan Mittal, Hong Yu
Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs
Huawen Feng, Yan Fan, Xiong Liu, Ting-En Lin, Zekun Yao, Yuchuan Wu, Fei Huang, Yongbin Li, Qianli Ma