Hallucination Detection
Hallucination detection in large language models (LLMs) focuses on identifying instances where models generate plausible-sounding but factually incorrect information. Current research explores various approaches, including analyzing internal model representations (hidden states), leveraging unlabeled data, and employing ensemble methods or smaller, faster models for efficient detection. This is a critical area because accurate and reliable LLM outputs are essential for trustworthy applications across numerous domains, from healthcare and autonomous driving to information retrieval and code generation.
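To make the ensemble idea concrete, the short sketch below combines the scores of several independent hallucination detectors into a single aggregate score by min-max normalizing each detector and averaging. It is a minimal illustration only: the detector names, the scores, and the aggregation rule are hypothetical assumptions, not the method of any paper listed here.

# Minimal sketch: aggregating per-example scores from several hallucination
# detectors into one score. Detector names and numbers are hypothetical.
import numpy as np

def aggregate_scores(score_matrix: np.ndarray) -> np.ndarray:
    """Combine detector scores of shape (n_detectors, n_examples).

    Each detector's scores are min-max normalized into [0, 1], then the
    normalized scores are averaged across detectors. A higher aggregate
    score indicates a higher suspected risk of hallucination.
    """
    mins = score_matrix.min(axis=1, keepdims=True)
    maxs = score_matrix.max(axis=1, keepdims=True)
    normalized = (score_matrix - mins) / np.maximum(maxs - mins, 1e-12)
    return normalized.mean(axis=0)

if __name__ == "__main__":
    # Hypothetical scores from three detectors over five generated outputs,
    # e.g. negative sequence log-probability, mean token entropy, and the
    # hallucination probability from a small supervised classifier.
    scores = np.array([
        [2.1, 0.4, 3.8, 1.0, 0.20],
        [0.9, 0.3, 1.7, 0.8, 0.10],
        [0.7, 0.1, 0.9, 0.5, 0.05],
    ])
    print(aggregate_scores(scores))  # one aggregate score per output

In practice the normalization and combination rule matter; averaging is only the simplest choice, and rank-based or learned combinations are common alternatives.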
Papers
Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation
Anas Himmi, Guillaume Staerman, Marine Picot, Pierre Colombo, Nuno M. Guerreiro
February 23, 2024

OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data
Chengcheng Wei, Ze Chen, Songtan Fang, Jiarong He, Max Gao
February 20, 2024