False Premise Hallucination
False premise hallucination is the tendency of large language models (LLMs) to generate factually incorrect answers when a prompt embeds a false premise or rests on incomplete information: rather than correcting the flawed assumption, the model answers as if it were true. For example, asked "Why did Einstein win the Nobel Prize for relativity?", a model may invent a justification instead of noting that the prize was actually awarded for the photoelectric effect. Current research focuses on detecting and mitigating these hallucinations with techniques such as knowledge graph integration, attention head analysis, and reinforcement learning, aiming to improve model accuracy and reliability. This work is central to making LLMs trustworthy and practically applicable across domains, particularly in applications that demand factual accuracy and robust reasoning.
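To make the knowledge-graph-integration idea concrete, below is a minimal Python sketch of premise verification against a toy triple store before a question is handed to a model. Everything here is illustrative: the KNOWLEDGE_GRAPH triples, the extract_premises stub, and check_premises are hypothetical stand-ins for this page's purposes, not the method of any particular paper (real systems would use an information-extraction model or the LLM itself to produce premise triples, and a full knowledge graph rather than a hard-coded set).

```python
# Minimal sketch of knowledge-graph-based false premise detection.
# All names (KNOWLEDGE_GRAPH, extract_premises, check_premises) are
# illustrative assumptions, not any specific paper's method.

# Toy knowledge graph as a set of (subject, relation, object) triples.
KNOWLEDGE_GRAPH = {
    ("Albert Einstein", "won_nobel_prize_for", "the photoelectric effect"),
    ("Marie Curie", "won_nobel_prize_for", "radioactivity research"),
}

def extract_premises(question: str) -> list[tuple[str, str, str]]:
    """Stand-in premise extractor. In practice an information-extraction
    model (or the LLM itself) would map the question to premise triples;
    here we hard-code one pattern to keep the sketch self-contained."""
    if "Einstein" in question and "relativity" in question:
        return [("Albert Einstein", "won_nobel_prize_for", "relativity")]
    return []

def check_premises(question: str) -> list[tuple[str, str, str]]:
    """Return the question's premises that contradict the knowledge graph."""
    false_premises = []
    for subj, rel, obj in extract_premises(question):
        # A premise is suspect when the graph records a *different*
        # object for the same subject and relation.
        known = {o for s, r, o in KNOWLEDGE_GRAPH if s == subj and r == rel}
        if known and obj not in known:
            false_premises.append((subj, rel, obj))
    return false_premises

if __name__ == "__main__":
    q = "Why did Einstein win the Nobel Prize for relativity?"
    bad = check_premises(q)
    if bad:
        # Surface the false premise instead of answering as if it held.
        print(f"False premise detected: {bad}")
    else:
        print("Premises consistent with the knowledge graph; safe to answer.")
```

The design point the sketch illustrates is that detection happens before generation: if a premise conflicts with stored knowledge, the system can refuse, correct, or caveat rather than letting the model rationalize the false assumption.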