Type II Hallucination
Type II hallucinations in large language models (LLMs) refer to factually incorrect or fabricated content generated in response to specific, often object-based queries, in contrast with the more open-ended "Type I" hallucinations. Current research focuses on detecting and mitigating these errors using techniques such as retrieval-augmented generation (RAG), fine-tuning, prompt engineering, and multi-agent debate, all aimed at improving model accuracy and reliability. Addressing Type II hallucinations is crucial for building trustworthy LLMs, particularly in high-stakes applications such as medicine, where factual accuracy is paramount.
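As a concrete illustration of the RAG-style mitigation mentioned above, the sketch below grounds an answer to an object-level query in retrieved passages and instructs the model to abstain when the context is insufficient. It is a minimal sketch under assumed interfaces, not the method of any particular paper: `retrieve` is a toy word-overlap ranker and `generate` is a hypothetical stand-in for a real LLM API call.

```python
# Minimal RAG sketch: ground the answer in retrieved passages and ask the
# model to abstain when the passages do not contain the answer.
# `retrieve` and `generate` are illustrative stand-ins (assumptions),
# not a specific library's API.
from typing import List


def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_terms & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def generate(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual model client."""
    return "(model answer grounded in the passages above)"


def answer_with_rag(query: str, corpus: List[str]) -> str:
    """Build a grounded prompt from retrieved passages, then query the model."""
    passages = retrieve(query, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using only the passages below. "
        "If the passages do not contain the answer, say you do not know.\n"
        f"Passages:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return generate(prompt)


if __name__ == "__main__":
    corpus = [
        "Aspirin is a nonsteroidal anti-inflammatory drug.",
        "The Eiffel Tower is located in Paris.",
    ]
    print(answer_with_rag("Where is the Eiffel Tower located?", corpus))
```

The abstention instruction in the prompt doubles as a simple prompt-engineering safeguard: rather than guessing when retrieval fails, the model is steered toward declining to answer, which is the failure mode preferred in high-stakes settings.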