Hallucination Detection
Hallucination detection in large language models (LLMs) focuses on identifying instances where models generate plausible-sounding but factually incorrect information. Current research explores various approaches, including analyzing internal model representations (hidden states), leveraging unlabeled data, and employing ensemble methods or smaller, faster models for efficient detection. This is a critical area because accurate and reliable LLM outputs are essential for trustworthy applications across numerous domains, from healthcare and autonomous driving to information retrieval and code generation.
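As a rough illustration of the hidden-state approach mentioned above, the sketch below trains a small linear probe on an LLM's internal representations to score statements for hallucination risk. It is a minimal, hedged example, not the method of any listed paper: the model name ("gpt2"), the `labeled_pairs` toy data, and the `last_token_hidden_state` helper are all illustrative assumptions.

```python
# Minimal sketch of hidden-state probing for hallucination detection.
# Assumptions: a small causal LM ("gpt2") stands in for a larger model, and
# labeled_pairs is a tiny hypothetical dataset (1 = hallucinated, 0 = faithful).

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any causal LM that exposes hidden states works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()


def last_token_hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Return the hidden state of the final token at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple with one (batch, seq_len, dim) tensor per layer
    return outputs.hidden_states[layer][0, -1]


# Hypothetical labeled examples; a real probe needs far more data.
labeled_pairs = [
    ("The Eiffel Tower is located in Paris.", 0),
    ("The Eiffel Tower was designed by Isaac Newton in 1700.", 1),
]

features = torch.stack([last_token_hidden_state(t) for t, _ in labeled_pairs]).numpy()
labels = [y for _, y in labeled_pairs]

# Linear probe over hidden states; real detectors may use deeper probes or calibration.
probe = LogisticRegression(max_iter=1000).fit(features, labels)

new_statement = "The Great Wall of China is visible from the Moon with the naked eye."
hallucination_prob = probe.predict_proba(
    last_token_hidden_state(new_statement).numpy().reshape(1, -1)
)[0, 1]
print(f"estimated hallucination probability: {hallucination_prob:.2f}")
```

The last-token, last-layer representation is only one common choice; in practice the most informative layer and token position vary by model, and the probe's reliability depends heavily on the quality and coverage of the labeled data.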
Papers
ETF: An Entity Tracing Framework for Hallucination Detection in Code Summaries
Kishan Maharaj, Vitobha Munigala, Srikanth G. Tamilselvam, Prince Kumar, Sayandeep Sen, Palani Kodeswaran, Abhijit Mishra, Pushpak Bhattacharyya
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs
Forrest Sheng Bao, Miaoran Li, Renyi Qu, Ge Luo, Erana Wan, Yujia Tang, Weisi Fan, Manveer Singh Tamber, Suleman Kazi, Vivek Sourabh, Mike Qi, Ruixuan Tu, Chenyu Xu, Matthew Gonzales, Ofer Mendelevitch, Amin Ahmad
ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability
Zhongxiang Sun, Xiaoxue Zang, Kai Zheng, Yang Song, Jun Xu, Xiao Zhang, Weijie Yu, Yang Song, Han Li
Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
Zhongye Liu, Hongbin Liu, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong
FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning
Ruosen Li, Ziming Luo, Xinya Du
Listen to the Patient: Enhancing Medical Dialogue Generation with Patient Hallucination Detection and Mitigation
Lang Qin, Yao Zhang, Hongru Liang, Adam Jatowt, Zhenglu Yang