Hallucination Annotator
Hallucination annotators are automated systems designed to identify and classify factual inaccuracies ("hallucinations") generated by large language models (LLMs). Current research focuses on making these annotators more accurate and scalable, often through Expectation-Maximization-based self-training frameworks or model-based detection that leverages an LLM's internal state-transition dynamics. These advances are crucial for improving the reliability and trustworthiness of LLMs across applications ranging from question answering to vision-language tasks, since they provide tools for both evaluating and mitigating hallucination.
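To make the EM-based self-training idea concrete, the sketch below shows one minimal way such a loop could be organized: an E-step assigns soft hallucination pseudo-labels to unlabeled claims using the current annotator, and an M-step refits the annotator on those pseudo-labels. All names here (`Claim`, `em_self_train`, the `predict_proba`/`fit` interface) are illustrative assumptions, not the API of any specific paper or library.

```python
"""Minimal sketch of an EM-style self-training loop for a hallucination
annotator. The annotator interface (predict_proba, fit) is hypothetical."""
from dataclasses import dataclass
from typing import List


@dataclass
class Claim:
    text: str            # a claim extracted from an LLM response
    label: float = 0.5   # soft probability that the claim is hallucinated


def e_step(annotator, claims: List[Claim]) -> None:
    """E-step: assign soft pseudo-labels to unlabeled claims
    using the current annotator."""
    for c in claims:
        c.label = annotator.predict_proba(c.text)


def m_step(annotator, claims: List[Claim]) -> None:
    """M-step: refit the annotator on the pseudo-labels, weighting
    each claim by how confident its current label is."""
    texts = [c.text for c in claims]
    targets = [c.label for c in claims]
    weights = [abs(c.label - 0.5) * 2 for c in claims]  # 0 = unsure, 1 = confident
    annotator.fit(texts, targets, sample_weight=weights)


def em_self_train(annotator, unlabeled_claims: List[Claim], rounds: int = 5):
    """Alternate E- and M-steps so the annotator and its
    pseudo-labels improve together."""
    for _ in range(rounds):
        e_step(annotator, unlabeled_claims)
        m_step(annotator, unlabeled_claims)
    return annotator
```

In practice, published systems differ in how claims are extracted, how pseudo-label confidence is weighted, and when the loop stops; this sketch only illustrates the alternating-update structure the summary refers to.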