Mitigating Hallucination
Hallucination, the generation of factually incorrect information by large language and vision-language models (LLMs and VLMs), is a significant challenge hindering their reliable deployment. Current research mitigates the issue through methods such as preemptive detection using internal model representations, data augmentation that creates counterfactual examples, and contrastive decoding strategies that rebalance the model's reliance on visual versus textual inputs. Successfully addressing hallucinations is crucial for building trustworthy AI systems across diverse applications, from question answering and text summarization to medical diagnosis and legal research.
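To make the contrastive-decoding idea concrete, the sketch below shows a single greedy decoding step that contrasts next-token logits from the full input (e.g., image plus prompt) against logits from a degraded condition (e.g., the image removed or noised), so tokens supported only by the language prior are penalized. This is a minimal, illustrative implementation of the general idea rather than the exact procedure of any paper listed here; the function name, the `alpha` contrast strength, and the `beta` plausibility cutoff are assumptions chosen for the example.

```python
import numpy as np

def contrastive_decode_step(logits_full: np.ndarray,
                            logits_degraded: np.ndarray,
                            alpha: float = 1.0,
                            beta: float = 0.1) -> int:
    """One greedy step of contrastive decoding (illustrative sketch).

    logits_full:     next-token logits conditioned on the full input.
    logits_degraded: logits from a degraded condition that exposes the
                     text-prior (hallucination-prone) preferences.
    alpha:           strength of the contrastive penalty.
    beta:            plausibility cutoff relative to the top token.
    """
    # Restrict to tokens the full model already finds plausible, so
    # amplifying the contrast cannot promote nonsense tokens.
    probs_full = np.exp(logits_full - logits_full.max())
    probs_full /= probs_full.sum()
    plausible = probs_full >= beta * probs_full.max()

    # Reward tokens the full input supports more than the degraded one.
    contrast = (1.0 + alpha) * logits_full - alpha * logits_degraded
    contrast[~plausible] = -np.inf
    return int(np.argmax(contrast))

# Toy example: token 2 is preferred only when the visual input is
# present, so the contrast promotes it over the text-prior token 0.
full = np.array([2.0, 0.5, 2.2, -1.0])
degraded = np.array([2.4, 0.4, 0.3, -1.0])
print(contrastive_decode_step(full, degraded))  # -> 2
```

In practice, the degraded logits come from a second forward pass of the same model with the visual (or other grounding) input ablated, and the combined scores replace the ordinary logits at each decoding step.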
Papers
A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs
Lehan He, Zeren Chen, Zhelun Shi, Tianyu Yu, Jing Shao, Lu Sheng
Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach
Shijian Deng, Wentian Zhao, Yu-Jhe Li, Kun Wan, Daniel Miranda, Ajinkya Kale, Yapeng Tian
Mitigating Object Hallucination via Concentric Causal Attention
Yun Xing, Yiheng Li, Ivan Laptev, Shijian Lu
Mitigating Hallucinations of Large Language Models in Medical Information Extraction via Contrastive Decoding
Derong Xu, Ziheng Zhang, Zhihong Zhu, Zhenxi Lin, Qidong Liu, Xian Wu, Tong Xu, Xiangyu Zhao, Yefeng Zheng, Enhong Chen
Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding
Kyungmin Min, Minbeom Kim, Kang-il Lee, Dongryeol Lee, Kyomin Jung
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs
Forrest Sheng Bao, Miaoran Li, Renyi Qu, Ge Luo, Erana Wan, Yujia Tang, Weisi Fan, Manveer Singh Tamber, Suleman Kazi, Vivek Sourabh, Mike Qi, Ruixuan Tu, Chenyu Xu, Matthew Gonzales, Ofer Mendelevitch, Amin Ahmad