Hallucination Detection
Hallucination detection in large language models (LLMs) focuses on identifying instances where models generate plausible-sounding but factually incorrect information. Current research explores various approaches, including analyzing internal model representations (hidden states), leveraging unlabeled data, and employing ensemble methods or smaller, faster models for efficient detection. This is a critical area because accurate and reliable LLM outputs are essential for trustworthy applications across numerous domains, from healthcare and autonomous driving to information retrieval and code generation.
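One of the approaches mentioned above, combining several detectors into an ensemble, can be sketched in a few lines. The sketch below is purely illustrative (the detector scores, threshold, and function names are assumptions, not from any of the listed papers): each detector assigns a score in [0, 1] where higher means more likely hallucinated, and the ensemble flags an output when the mean score crosses a threshold.

```python
def aggregate_scores(scores, threshold=0.5):
    """Average per-detector hallucination scores and apply a threshold.

    scores: list of floats in [0, 1], one per detector (higher = more
    likely hallucinated). Returns the mean score and a boolean flag.
    """
    mean_score = sum(scores) / len(scores)
    return mean_score, mean_score >= threshold

# Hypothetical scores from three independent detectors for one model output.
detector_scores = [0.8, 0.6, 0.3]
score, is_hallucination = aggregate_scores(detector_scores)
```

A mean is only one choice of aggregator; majority voting or taking the maximum score are equally simple alternatives with different precision/recall trade-offs.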
Papers
Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation
Anas Himmi, Guillaume Staerman, Marine Picot, Pierre Colombo, Nuno M. Guerreiro
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data
Chengcheng Wei, Ze Chen, Songtan Fang, Jiarong He, Max Gao