Hallucination Detection
Hallucination detection in large language models (LLMs) focuses on identifying instances where models generate plausible-sounding but factually incorrect information. Current research explores various approaches, including analyzing internal model representations (hidden states), leveraging unlabeled data, and employing ensemble methods or smaller, faster models for efficient detection. This is a critical area because accurate and reliable LLM outputs are essential for trustworthy applications across numerous domains, from healthcare and autonomous driving to information retrieval and code generation.
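As a brief illustration of the hidden-state analysis approach mentioned above, the sketch below mean-pools a language model's hidden states over a generated answer and trains a small probe classifier to flag likely hallucinations. The model name, layer choice, pooling scheme, and toy labels are illustrative assumptions, not the method of any particular paper listed here.

```python
# Minimal sketch: probe a causal LM's hidden states for hallucination signals.
# Assumptions: "gpt2" stands in for any causal LM with accessible hidden states;
# the labeled examples are toy data, not a real benchmark.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def answer_embedding(prompt: str, answer: str, layer: int = -1) -> torch.Tensor:
    """Mean-pool the chosen layer's hidden states over the answer tokens."""
    text = prompt + " " + answer
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]  # shape: (1, seq_len, dim)
    # Approximate answer span: token counts from separate tokenization can
    # differ slightly at the prompt/answer boundary due to BPE merges.
    n_answer = len(tokenizer(answer, add_special_tokens=False)["input_ids"])
    return hidden[0, -n_answer:, :].mean(dim=0)

# Toy labeled pairs (1 = hallucinated, 0 = faithful); real work uses a benchmark.
examples = [
    ("Who wrote Hamlet?", "William Shakespeare wrote Hamlet.", 0),
    ("Who wrote Hamlet?", "Charles Dickens wrote Hamlet in 1920.", 1),
    ("What is the capital of France?", "The capital of France is Paris.", 0),
    ("What is the capital of France?", "The capital of France is Lyon.", 1),
]
X = torch.stack([answer_embedding(p, a) for p, a, _ in examples]).numpy()
y = [label for _, _, label in examples]

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))  # in-sample hallucination flags from the probe
```

In practice, such probes are trained on labeled faithfulness benchmarks and evaluated on held-out model outputs rather than on their own training data.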
86 papers
Papers
ETF: An Entity Tracing Framework for Hallucination Detection in Code Summaries
Kishan Maharaj, Vitobha Munigala, Srikanth G. Tamilselvam, Prince Kumar, Sayandeep Sen, Palani Kodeswaran, Abhijit Mishra, Pushpak Bhattacharyya
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs
Forrest Sheng Bao, Miaoran Li, Renyi Qu, Ge Luo, Erana Wan, Yujia Tang, Weisi Fan, Manveer Singh Tamber, Suleman Kazi, Vivek Sourabh, Mike Qi, and 5 others
ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability
Zhongxiang Sun, Xiaoxue Zang, Kai Zheng, Yang Song, Jun Xu, Xiao Zhang, Weijie Yu, Yang Song, Han Li
Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models
Zhongye Liu, Hongbin Liu, Yuepeng Hu, Zedian Shao, Neil Zhenqiang Gong