Content Hallucination
Content hallucination, the generation of factually incorrect or inconsistent content by large language models and large vision-language models (LLMs and LVLMs), is a major obstacle to their reliable deployment. Current research focuses on detecting and mitigating hallucinations with techniques such as hierarchical feedback learning, contrastive decoding, retrieval-augmented generation, and prompt engineering, applied across a range of model architectures. Addressing the problem is essential for the trustworthiness and safety of these models in high-stakes applications such as medical diagnosis and financial reporting. Developing robust benchmarks and evaluation protocols remains a key line of ongoing work.
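To make one of the techniques above concrete, the sketch below illustrates the core idea behind contrastive decoding for LVLMs: compare next-token logits computed with and without the visual input and down-weight tokens that the language prior alone would favor. This is a minimal, self-contained illustration; the toy logit vectors, the alpha/beta parameters, and the helper functions are illustrative assumptions, not the exact formulation of any paper listed here.

```python
# Minimal sketch of contrastive decoding for an LVLM (NumPy only).
# The two logit vectors stand in for a real model's outputs; in practice
# they would come from a forward pass with and without the image input.
import numpy as np

def softmax(x):
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def contrastive_decode(logits_with_image, logits_without_image, alpha=1.0, beta=0.1):
    """Re-weight next-token logits to favor tokens grounded in the image.

    alpha scales the contrast between the visually conditioned and
    unconditioned distributions; beta sets a plausibility cutoff relative
    to the most likely visually conditioned token. (Assumed defaults.)
    """
    p_img = softmax(logits_with_image)

    # Plausibility constraint: keep only tokens that are reasonably likely
    # under the visually conditioned model.
    candidates = p_img >= beta * p_img.max()

    # Contrastive score: boost tokens whose likelihood depends on the image.
    scores = (1 + alpha) * logits_with_image - alpha * logits_without_image
    scores = np.where(candidates, scores, -np.inf)
    return int(np.argmax(scores))  # greedy pick of the contrast-adjusted token

# Toy example: token 2 becomes much more likely when the image is attended to.
with_img = np.array([1.0, 0.5, 3.0, 0.2])
without_img = np.array([1.2, 0.6, 1.0, 0.3])
print(contrastive_decode(with_img, without_img))  # -> 2
```

The plausibility cutoff is what keeps the subtraction from promoting implausible tokens that merely happen to be disfavored by the text-only pass; published variants differ in how they construct the contrast distribution (e.g., removing or distorting the image) and in how the scores are combined.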
Papers
Multi-Modal Hallucination Control by Visual Information Grounding
Alessandro Favero, Luca Zancato, Matthew Trager, Siddharth Choudhary, Pramuditha Perera, Alessandro Achille, Ashwin Swaminathan, Stefano Soatto
What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models
Junho Kim, Yeon Ju Kim, Yong Man Ro
Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation
Elijah Berberette, Jack Hutchins, Amir Sadovnik
A Survey on Hallucination in Large Vision-Language Models
Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, Wei Peng