Level Hallucination

Hallucination in large language models (LLMs), the generation of factually incorrect content at different levels such as objects, attributes, relations, and events, is a significant obstacle to their reliable use. Current research focuses on detecting and mitigating these hallucinations through techniques such as dynamic retrieval augmentation, confidence-based mitigation, and post-hoc refinement against source knowledge. These efforts aim to improve the trustworthiness and accuracy of LLMs across applications ranging from question answering and information retrieval to multimodal tasks involving image and video understanding, with the ultimate goal of building more reliable and faithful models.
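
To make the confidence-based strategies mentioned above concrete, the following is a minimal sketch, not taken from any specific paper, of flagging low-confidence token spans using a model's per-token log-probabilities; the function name, threshold, and example values are illustrative assumptions.

```python
import math
from typing import List, Tuple

def flag_low_confidence_spans(
    tokens: List[str],
    logprobs: List[float],
    threshold: float = math.log(0.2),  # assumed cutoff: tokens below ~20% probability
    min_span_len: int = 2,
) -> List[Tuple[int, int]]:
    """Return (start, end) index ranges of consecutive tokens whose
    log-probability falls below `threshold`; such spans are treated as
    candidate hallucinations to be verified or regenerated."""
    spans, start = [], None
    for i, lp in enumerate(logprobs):
        if lp < threshold:
            if start is None:
                start = i  # open a new low-confidence span
        else:
            if start is not None and i - start >= min_span_len:
                spans.append((start, i))
            start = None
    if start is not None and len(tokens) - start >= min_span_len:
        spans.append((start, len(tokens)))
    return spans


# Hypothetical example: the model is least confident about the year it generated.
tokens = ["The", "Eiffel", "Tower", "was", "completed", "in", "18", "99", "."]
logprobs = [-0.1, -0.2, -0.1, -0.3, -0.4, -0.2, -2.5, -3.1, -0.1]
print(flag_low_confidence_spans(tokens, logprobs))  # [(6, 8)]
```

In a full mitigation pipeline, flagged spans would typically trigger a follow-up step, for example retrieving supporting evidence or regenerating the span conditioned on retrieved passages, in line with the retrieval-augmented and post-hoc refinement approaches surveyed here.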

Papers