False Premise Hallucination
False premise hallucination refers to the tendency of large language models (LLMs) to generate factually incorrect output when a query rests on a false premise, that is, an assumption contradicting established facts, or on incomplete information. Current research focuses on detecting and mitigating these hallucinations, employing techniques such as knowledge graph integration, attention head analysis, and reinforcement learning to improve model accuracy and reliability. This work is crucial for the trustworthiness and practical applicability of LLMs across domains, particularly in applications that demand factual accuracy and robust reasoning.
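To make the detection idea concrete, the sketch below shows one minimal way a system might guard against false-premise questions: premises supplied with a query are checked against a small fact store, and the system declines to answer as asked when a premise is contradicted. The toy knowledge base, the Premise structure, and the check_premise/answer helpers are illustrative assumptions for this page, not the method of any paper listed below.

```python
# Minimal false-premise guard sketch (illustrative assumptions only):
# premises attached to a question are verified against a small fact store,
# and the system refuses to answer as asked when a premise is contradicted.
from dataclasses import dataclass

# Toy knowledge base: (subject, relation) -> value. A real system would use
# a knowledge graph or retrieved evidence instead of this hard-coded dict.
KNOWLEDGE = {
    ("water", "boils_at_sea_level_celsius"): "100",
    ("the moon", "has_native_atmosphere"): "no",
}

@dataclass
class Premise:
    subject: str
    relation: str
    value: str

def check_premise(p: Premise) -> str:
    """Return 'supported', 'contradicted', or 'unknown' for a premise."""
    known = KNOWLEDGE.get((p.subject, p.relation))
    if known is None:
        return "unknown"
    return "supported" if known == p.value else "contradicted"

def answer(question: str, premises: list[Premise]) -> str:
    """Refuse to answer when any stated premise conflicts with the fact store.

    Extracting premises from free text is out of scope here; they are
    passed in explicitly alongside the question.
    """
    for p in premises:
        if check_premise(p) == "contradicted":
            correct = KNOWLEDGE[(p.subject, p.relation)]
            return (f"The question assumes {p.subject} {p.relation} = {p.value}, "
                    f"but the fact store records {correct}; answering as asked "
                    "would propagate a false premise.")
    return "Premises check out; delegate to the base model."  # placeholder

if __name__ == "__main__":
    q = "Why does water boil at 50 degrees Celsius at sea level?"
    print(answer(q, [Premise("water", "boils_at_sea_level_celsius", "50")]))
```

The design choice to verify premises before generation, rather than filtering the answer afterwards, mirrors the general motivation of the detection methods surveyed above: a contradicted premise is caught at the input stage, before the model can build a fluent but ungrounded response on top of it.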
Papers (13)
February 26, 2025
REFIND at SemEval-2025 Task 3: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
DongGeon Lee, Hwanjo Yu (Pohang University of Science and Technology (POSTECH))

February 19, 2025
Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning
Ningke Li, Yahui Song, Kailong Wang, Yuekang Li, Ling Shi, Yi Liu, Haoyu Wang (Huazhong University of Science and Technology; National University of Singapore; University of New South Wales; Nanyang Technological University)