Hallucination Detection
Hallucination detection in large language models (LLMs) focuses on identifying instances where models generate plausible-sounding but factually incorrect information. Current research explores various approaches, including analyzing internal model representations (hidden states), leveraging unlabeled data, and employing ensemble methods or smaller, faster models for efficient detection. This is a critical area because accurate and reliable LLM outputs are essential for trustworthy applications across numerous domains, from healthcare and autonomous driving to information retrieval and code generation.
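As a concrete illustration of the hidden-state line of work mentioned above, the sketch below trains a simple linear probe on a model's internal representations to score statements as hallucinated or factual. This is a minimal sketch, not any specific paper's method: the model name "gpt2", the last_token_hidden_state helper, and the tiny hand-labeled examples are all illustrative assumptions, and real studies use far larger labeled or unlabeled corpora.

```python
# Minimal sketch: a linear probe over LLM hidden states for hallucination detection.
# Assumes a Hugging Face causal LM and a small labeled set of
# (statement, is_hallucination) pairs; model name and data are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # placeholder; any causal LM with accessible hidden states works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Return the chosen layer's hidden state at the final token of `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states: tuple of (num_layers + 1) tensors, each (1, seq_len, dim)
    return outputs.hidden_states[layer][0, -1]

# Illustrative labeled examples: 1 = hallucinated, 0 = factual.
statements = [
    ("The Eiffel Tower is located in Paris.", 0),
    ("The Eiffel Tower was built in 1789 by Napoleon.", 1),
    ("Water boils at 100 degrees Celsius at sea level.", 0),
    ("Water boils at 50 degrees Celsius at sea level.", 1),
]

X = torch.stack([last_token_hidden_state(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

# Fit the probe and report the probability that each statement is hallucinated.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict_proba(X)[:, 1])
```

A linear probe is deliberately lightweight: it adds no inference-time cost beyond a forward pass that the model already performs, which is in the spirit of the efficient, smaller-detector approaches surveyed here.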
Papers
April 4, 2024
Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations
Mahjabin Nahar, Haeseung Seo, Eun-Ju Lee, Aiping Xiong, Dongwon Lee
SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection
Bradley P. Allen, Fina Polat, Paul Groth