Factual Claim Verification
Factual claim verification in natural language processing focuses on assessing the accuracy of statements generated by large language models (LLMs) and other text sources. Current research emphasizes improving LLM factuality through techniques like self-consistency decoding, which leverages multiple model outputs to enhance accuracy, and comparative methods that contrast model predictions against "hallucinatory" and truthful comparators. This field is crucial for mitigating the spread of misinformation and enhancing the reliability of AI-generated content across various applications, from legal and medical domains to general knowledge retrieval.
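The self-consistency idea mentioned above is, at its core, a sample-and-vote procedure: draw several answers from the model and keep the one that recurs most often, using agreement as a rough factuality signal. The sketch below illustrates that idea only; the `generate` callable is a hypothetical stand-in for an LLM API and is not taken from any of the papers listed here.

```python
from collections import Counter

def self_consistency_answer(generate, prompt, n_samples=5, temperature=0.7):
    """Sample several answers and return the most frequent one.

    `generate` is a hypothetical callable wrapping an LLM: it takes a prompt
    and a sampling temperature and returns a short answer string.
    """
    answers = [generate(prompt, temperature=temperature) for _ in range(n_samples)]
    # Normalize lightly so trivially different surface forms vote together.
    normalized = [a.strip().lower() for a in answers]
    best_answer, count = Counter(normalized).most_common(1)[0]
    # The vote share can serve as a crude confidence score for the claim.
    return best_answer, count / n_samples
```

A low vote share from such a routine is one simple way to flag an answer for further checking, for example against retrieved evidence, in the spirit of the retrieval-augmented approaches the survey below covers.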
Papers
Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity
Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, Yue Zhang
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong