Content Hallucination
Content hallucination, the generation of factually incorrect or inconsistent information by large language and vision-language models (LLMs and LVLMs), is a significant obstacle to their reliable deployment. Current research focuses on detecting and mitigating hallucinations, employing techniques such as hierarchical feedback learning, contrastive decoding, retrieval-augmented generation, and prompt engineering across various model architectures. Addressing this issue is crucial for improving the trustworthiness and safety of these models in applications ranging from medical diagnosis to financial reporting. The development of robust benchmarks and evaluation protocols is also a key area of ongoing investigation.
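To make one of the listed techniques concrete, the sketch below illustrates the general idea behind contrastive decoding for hallucination mitigation: at each generation step, logits conditioned on the full input are contrasted against logits conditioned on a perturbed input (for example, a distorted image in the LVLM case), while a plausibility constraint keeps the choice among tokens the full model already ranks highly. This is a minimal, generic illustration, not the method of either paper listed below; the function name `contrastive_decoding_step` and the parameters `alpha` and `beta` are hypothetical.

```python
import numpy as np

def contrastive_decoding_step(logits_full, logits_perturbed, alpha=1.0, beta=0.1):
    """One step of a generic contrastive decoding scheme (illustrative sketch):
    amplify the difference between logits from the full input and logits from
    a perturbed input, restricted to tokens the full model finds plausible."""
    # Softmax over the full-input logits to define the plausibility set.
    probs_full = np.exp(logits_full - logits_full.max())
    probs_full /= probs_full.sum()

    # Plausibility constraint: keep tokens whose probability is within a
    # factor `beta` of the most likely token under the full input.
    keep = probs_full >= beta * probs_full.max()

    # Contrastive score: (1 + alpha) * full - alpha * perturbed.
    scores = (1.0 + alpha) * logits_full - alpha * logits_perturbed
    scores[~keep] = -np.inf  # mask implausible tokens

    return int(np.argmax(scores))  # greedy pick; sampling is also possible

# Toy usage with a 5-token vocabulary: token 1 is supported by the full input
# but not by the perturbed one, so the contrastive score favors it.
full = np.array([2.0, 1.5, 0.2, -1.0, 0.0])
pert = np.array([2.1, 0.3, 0.1, -1.2, 0.4])
print(contrastive_decoding_step(full, pert))  # -> 1
```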
Papers
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Chenxi Wang, Xiang Chen, Ningyu Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, Huajun Chen
Magnifier Prompt: Tackling Multimodal Hallucination via Extremely Simple Instructions
Yuhan Fu, Ruobing Xie, Jiazhen Liu, Bangxiang Lan, Xingwu Sun, Zhanhui Kang, Xirong Li