Content Hallucination

Content hallucination, the generation of factually incorrect or internally inconsistent content by large language models and large vision-language models (LLMs and LVLMs), is a significant obstacle to their reliable deployment. Current research focuses on detecting and mitigating hallucinations with techniques such as hierarchical feedback learning, contrastive decoding, retrieval-augmented generation, and prompt engineering, applied across a range of model architectures. Addressing the problem is crucial for the trustworthiness and safety of these models in applications from medical diagnosis to financial reporting. Developing robust benchmarks and evaluation protocols is also a key area of ongoing investigation.
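
To make one of these mitigation strategies concrete, below is a minimal, illustrative sketch of the general contrastive-decoding idea: token scores from an "expert" pass (e.g. conditioned on the full or visual context) are contrasted against an "amateur" pass (e.g. a smaller model or a context-ablated run), so that tokens favored mainly by the weaker, hallucination-prone distribution are down-weighted. The function name, the `alpha`/`beta` parameters, and the toy logits are assumptions for illustration, not the method of any specific paper listed here.

```python
import numpy as np

def contrastive_decode_step(expert_logits, amateur_logits, alpha=0.1, beta=1.0):
    """One greedy decoding step of a generic contrastive-decoding scheme.

    Tokens are scored by the gap between the expert and amateur log-probabilities;
    a plausibility constraint keeps only tokens the expert itself finds likely,
    so rare junk tokens are not rewarded simply because the amateur dislikes them.
    """
    # Log-softmax of each model's logits.
    expert_logp = expert_logits - np.logaddexp.reduce(expert_logits)
    amateur_logp = amateur_logits - np.logaddexp.reduce(amateur_logits)

    # Plausibility mask: keep tokens within a factor alpha of the expert's best token.
    plausible = expert_logp >= np.log(alpha) + expert_logp.max()

    # Contrastive score: prefer tokens the expert likes much more than the amateur.
    scores = np.where(plausible, expert_logp - beta * amateur_logp, -np.inf)
    return int(np.argmax(scores))

# Toy example over a 5-token vocabulary: the selected token is one the expert
# prefers but the amateur does not, rather than the token both rank highest.
expert = np.array([2.0, 1.5, 0.2, -1.0, -3.0])
amateur = np.array([2.5, 0.3, 0.1, -1.2, -2.8])
print(contrastive_decode_step(expert, amateur))
```

In practice the two logit vectors would come from model forward passes at each generation step; the sketch only shows how the contrastive score reshapes the token choice.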

Papers