Type II Hallucination
Type II hallucinations in large language models (LLMs) refer to the generation of factually incorrect or fabricated information in response to specific, often object-based queries, in contrast to the more open-ended "Type I" hallucinations. Current research focuses on detecting and mitigating these hallucinations through techniques such as retrieval-augmented generation (RAG), fine-tuning, prompt engineering, and multi-agent debate, all aimed at improving model accuracy and reliability. Understanding and addressing Type II hallucinations is crucial for building trustworthy LLMs, particularly in high-stakes applications such as medicine, where factual accuracy is paramount.
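As a concrete illustration of one of the mitigation strategies mentioned above, the sketch below shows a minimal retrieval-augmented generation (RAG) loop in Python: the model is prompted to answer a specific factual query only from retrieved evidence and to abstain otherwise, which discourages fabricated answers. The corpus, the term-overlap retriever, the prompt template, and the generate stub are illustrative assumptions rather than the method of any particular paper.

"""Minimal sketch of retrieval-augmented generation (RAG) aimed at reducing
Type II hallucinations on specific factual queries.

Assumptions (not from the source): `generate` is a stand-in for any LLM call;
the corpus, retriever, and prompt template are purely illustrative.
"""

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive term overlap with the query
    (a placeholder for a real BM25 or dense retriever)."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(d.text.lower().split())), d) for d in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]


def build_grounded_prompt(query: str, evidence: list[Document]) -> str:
    """Instruct the model to answer only from retrieved evidence and to
    abstain otherwise, so fabricated (Type II) answers are discouraged."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in evidence)
    return (
        "Answer the question using ONLY the evidence below. "
        "If the evidence is insufficient, reply 'I don't know.'\n\n"
        f"Evidence:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def generate(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual model client."""
    raise NotImplementedError("plug in an LLM client here")


if __name__ == "__main__":
    corpus = [
        Document("d1", "The Eiffel Tower is located in Paris and was completed in 1889."),
        Document("d2", "Mount Everest is the highest mountain above sea level."),
    ]
    query = "When was the Eiffel Tower completed?"
    prompt = build_grounded_prompt(query, retrieve(query, corpus))
    print(prompt)  # generate(prompt) would then produce the grounded answer

Swapping the toy term-overlap retriever for BM25 or a dense retriever, and the stub for a real model client, yields the standard RAG setup discussed in this line of work; detection-oriented approaches (e.g., consistency checking across multiple samples) address the same problem from the evaluation side.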