Mitigating Hallucination
Hallucination, the generation of factually incorrect or unsupported content by large language models and vision-language models (LLMs and VLMs), is a significant obstacle to their reliable deployment. Current research mitigates the problem through several complementary approaches, including preemptive detection based on the model's internal representations, data augmentation that constructs counterfactual training examples, and contrastive decoding strategies that rebalance how much the model relies on visual versus textual evidence. Addressing hallucination is essential for building trustworthy AI systems across applications ranging from question answering and text summarization to medical diagnosis and legal research.
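To make the contrastive-decoding idea mentioned above concrete, the sketch below contrasts next-token logits computed from the original image with logits computed from a deliberately degraded image, so that tokens driven mostly by language priors (rather than visual evidence) are penalized. This is a minimal, generic sketch assuming a PyTorch setup; the function name, hyperparameters, and the choice of degradation are illustrative and do not reproduce any single paper's exact method.

```python
import torch

def contrastive_decode_step(logits_full, logits_degraded, alpha=1.0, beta=0.1):
    """One step of a generic visual contrastive decoding scheme (illustrative).

    logits_full:     next-token logits conditioned on the original image + prompt
    logits_degraded: next-token logits conditioned on a degraded image (e.g. a
                     heavily noised copy) + the same prompt, exposing the
                     model's language-prior bias
    alpha:           strength of the contrastive penalty
    beta:            adaptive-plausibility cutoff relative to the top token
    """
    # Contrast: boost tokens supported by the visual input, penalize tokens the
    # model would emit even without useful visual evidence.
    contrastive = (1 + alpha) * logits_full - alpha * logits_degraded

    # Plausibility constraint: only keep tokens that are reasonably likely under
    # the full (non-degraded) distribution, so the contrast cannot promote
    # otherwise implausible tokens.
    probs_full = torch.softmax(logits_full, dim=-1)
    cutoff = beta * probs_full.max(dim=-1, keepdim=True).values
    contrastive = contrastive.masked_fill(probs_full < cutoff, float("-inf"))

    return torch.argmax(contrastive, dim=-1)  # greedy pick; sampling also works
```

In practice the two logit tensors would come from two forward passes of the same VLM, one with the clean image and one with the degraded copy, applied at each decoding step.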
Papers
Confidence-Aware Sub-Structure Beam Search (CABS): Mitigating Hallucination in Structured Data Generation with Large Language Models
Chengwei Wei, Kee Kiat Koo, Amir Tavanaei, Karim Bouyarmane
Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools
Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning, Daniel E. Ho
Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
Pritam Sarkar, Sayna Ebrahimi, Ali Etemad, Ahmad Beirami, Sercan Ö. Arık, Tomas Pfister
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs
Sangmin Woo, Jaehyuk Jang, Donguk Kim, Yubin Choi, Changick Kim