Hallucination Detection
Hallucination detection in large language models (LLMs) focuses on identifying instances where models generate plausible-sounding but factually incorrect information. Current research explores various approaches, including analyzing internal model representations (hidden states), leveraging unlabeled data, and employing ensemble methods or smaller, faster models for efficient detection. This is a critical area because accurate and reliable LLM outputs are essential for trustworthy applications across numerous domains, from healthcare and autonomous driving to information retrieval and code generation.
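One concrete instance of the hidden-state approach is to train a lightweight probe (e.g., logistic regression) on a model's internal representations of factual versus hallucinated answers. The sketch below is a minimal illustration of that idea, not a specific method from any of the papers listed here: it assumes a Hugging Face causal LM (gpt2 as a stand-in) and a tiny hand-labeled set of question–answer strings; the example texts, labels, and the last_token_hidden_state helper are all hypothetical.

```python
# Minimal sketch: probing final-layer hidden states for hallucination detection.
# Assumptions: any Hugging Face causal LM works as a stand-in; the labeled
# examples below are toy data for illustration only.

import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # stand-in model; a real setup would use the target LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_hidden_state(text: str) -> np.ndarray:
    """Return the final-layer hidden state of the last token of `text`."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of (num_layers + 1) tensors, each [1, seq_len, dim]
    final_layer = outputs.hidden_states[-1]
    return final_layer[0, -1, :].numpy()

# Hypothetical labeled examples: 1 = hallucinated answer, 0 = factual answer.
train_texts = [
    "Q: Who wrote Hamlet? A: William Shakespeare.",     # factual
    "Q: Who wrote Hamlet? A: Charles Dickens.",         # hallucinated
    "Q: What is the capital of France? A: Paris.",      # factual
    "Q: What is the capital of France? A: Marseille.",  # hallucinated
]
train_labels = [0, 1, 0, 1]

# Fit a linear probe on the extracted hidden states.
X = np.stack([last_token_hidden_state(t) for t in train_texts])
probe = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a new model output: a higher probability suggests hallucination.
candidate = "Q: Who painted the Mona Lisa? A: Vincent van Gogh."
score = probe.predict_proba(last_token_hidden_state(candidate).reshape(1, -1))[0, 1]
print(f"hallucination probability: {score:.2f}")
```

In practice such probes are trained on thousands of labeled generations rather than four toy strings, and the layer and token position used for the representation are treated as hyperparameters.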
Papers
Eleven papers, published between January 11, 2023 and October 16, 2023.