Factual Claim
Factual claim verification in natural language processing focuses on assessing the accuracy of statements generated by large language models (LLMs) and other text sources. Current research emphasizes improving LLM factuality through techniques such as self-consistency decoding, which samples multiple model outputs and favors answers the model produces consistently, and comparative methods that contrast model predictions against "hallucinatory" and truthful comparators. This field is crucial for mitigating the spread of misinformation and enhancing the reliability of AI-generated content across applications ranging from legal and medical domains to general knowledge retrieval.
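As a rough illustration of the self-consistency idea mentioned above, the sketch below samples an answer several times and keeps the most frequent one, treating its vote share as a crude agreement signal. This is a minimal sketch, not the method of any paper listed here; `generate_fn`, `self_consistency_answer`, and the toy generator are hypothetical names introduced only for illustration.

```python
# Minimal sketch of self-consistency decoding for factuality checking.
# Assumes a user-supplied generate_fn(prompt) -> str that samples one answer
# from an LLM with temperature > 0 (hypothetical helper, not from the papers above).
from collections import Counter
from typing import Callable, List, Tuple


def self_consistency_answer(
    prompt: str,
    generate_fn: Callable[[str], str],
    num_samples: int = 10,
) -> Tuple[str, float]:
    """Sample several answers and return the most frequent one with its vote share."""
    answers: List[str] = [generate_fn(prompt).strip().lower() for _ in range(num_samples)]
    most_common, count = Counter(answers).most_common(1)[0]
    # The vote share can serve as a rough confidence signal:
    # low agreement across samples often correlates with hallucination.
    return most_common, count / num_samples


if __name__ == "__main__":
    import random

    # Toy stand-in for an LLM: draws answers from a fixed distribution.
    def toy_generate(_prompt: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    answer, agreement = self_consistency_answer("Capital of France?", toy_generate, num_samples=20)
    print(f"answer={answer!r}, agreement={agreement:.2f}")
```

In practice, the agreement score would be thresholded or calibrated before being used as a factuality estimate; the majority vote here is only the simplest possible aggregation over sampled outputs.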
Papers
Semantic Consistency-Based Uncertainty Quantification for Factuality in Radiology Report Generation
Chenyu Wang, Weichao Zhou, Shantanu Ghosh, Kayhan Batmanghelich, Wenchao Li
T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts
Ziwei Huang, Wanggui He, Quanyu Long, Yandi Wang, Haoyuan Li, Zhelun Yu, Fangxun Shu, Long Chen, Hao Jiang, Leilei Gan