Content Hallucination
Content hallucination, the generation of factually incorrect or inconsistent information by large language models and large vision-language models (LLMs and LVLMs), is a significant challenge hindering their reliable deployment. Current research focuses on detecting and mitigating hallucinations with techniques such as hierarchical feedback learning, contrastive decoding, retrieval-augmented generation, and prompt engineering across a range of model architectures. Addressing this issue is crucial for improving the trustworthiness and safety of these models in high-stakes applications such as medical diagnosis and financial reporting. The development of robust benchmarks and evaluation protocols is also a key area of ongoing investigation.
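To make one of the mitigation techniques above concrete, the following is a minimal sketch of a single contrastive-decoding step. It assumes you already have vocabulary logits from a stronger "expert" model and a weaker "amateur" model (or the same model on a perturbed input, as in visual contrastive decoding for LVLMs); the function name, parameters, and toy inputs are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def contrastive_decode_step(expert_logits, amateur_logits, alpha=0.1, beta=1.0):
    """One greedy decoding step of contrastive decoding (illustrative sketch).

    expert_logits / amateur_logits: 1-D arrays of next-token logits from the
    stronger and weaker model, respectively. Tokens the expert itself deems
    implausible are masked out (an adaptive plausibility constraint), then the
    amateur's log-probabilities are subtracted to down-weight the generic,
    hallucination-prone continuations both models favor.
    """
    # Convert logits to log-probabilities.
    expert_logprobs = expert_logits - np.logaddexp.reduce(expert_logits)
    amateur_logprobs = amateur_logits - np.logaddexp.reduce(amateur_logits)

    # Plausibility mask: keep tokens within log(alpha) of the expert's best token.
    cutoff = np.log(alpha) + expert_logprobs.max()
    plausible = expert_logprobs >= cutoff

    # Contrastive score on plausible tokens only; pick the best one greedily.
    scores = np.where(plausible, expert_logprobs - beta * amateur_logprobs, -np.inf)
    return int(np.argmax(scores))

# Toy usage with random logits over a 10-token vocabulary.
rng = np.random.default_rng(0)
expert = rng.normal(size=10)
amateur = rng.normal(size=10)
print(contrastive_decode_step(expert, amateur))
```

In practice the same step is applied autoregressively at every position, and the strength of the amateur penalty (beta) and the plausibility threshold (alpha) are tuned per model pair; the sketch omits sampling and batching for clarity.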
Papers
HaloQuest: A Visual Hallucination Dataset for Advancing Multimodal Reasoning
Zhecan Wang, Garrett Bingham, Adams Yu, Quoc Le, Thang Luong, Golnaz Ghiasi
Developing a Reliable, General-Purpose Hallucination Detection and Mitigation Service: Insights and Lessons Learned
Song Wang, Xun Wang, Jie Mei, Yujia Xie, Sean Muarray, Zhang Li, Lingfeng Wu, Si-Qing Chen, Wayne Xiong