Mitigating Hallucination
Hallucination, the generation of factually incorrect or unsupported content by large language models and vision-language models (LLMs and VLMs), remains a significant obstacle to their reliable deployment. Current research mitigates the problem in several ways, including preemptive detection that exploits internal model representations, data augmentation that constructs counterfactual training examples, and contrastive decoding strategies that counteract over-reliance on language priors by contrasting outputs conditioned on the original versus perturbed visual inputs. Successfully addressing hallucinations is crucial for building trustworthy AI systems across applications ranging from question answering and text summarization to medical diagnosis and legal research.
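To make the contrastive-decoding idea concrete, the sketch below shows a generic visual-contrastive decoding step: the same prompt is scored with the real image and with a noised copy, and the two sets of next-token logits are combined so that tokens driven mostly by the language prior are penalized. This is a minimal illustration under assumptions, not the method of any specific paper listed here; the `model(input_ids=..., pixel_values=...)` interface, `contrastive_decode_step`, and `greedy_generate` are hypothetical names to be adapted to a concrete VLM API.

```python
# Minimal sketch of visual contrastive decoding for a VLM.
# Assumes a hypothetical model that accepts `input_ids` and `pixel_values`
# and returns per-position next-token logits (adapt to your actual API).
import torch


@torch.no_grad()
def contrastive_decode_step(model, input_ids, pixel_values, alpha=1.0, noise_std=0.5):
    """Return next-token logits that down-weight language-prior-driven tokens."""
    # Logits conditioned on the original image.
    logits_visual = model(input_ids=input_ids, pixel_values=pixel_values).logits[:, -1, :]

    # Logits conditioned on a heavily noised image (visual evidence degraded).
    distorted = pixel_values + noise_std * torch.randn_like(pixel_values)
    logits_blind = model(input_ids=input_ids, pixel_values=distorted).logits[:, -1, :]

    # Contrastive combination: amplify the part of the prediction that
    # actually depends on the visual input.
    return (1.0 + alpha) * logits_visual - alpha * logits_blind


@torch.no_grad()
def greedy_generate(model, tokenizer, input_ids, pixel_values, max_new_tokens=32):
    """Greedy decoding loop that applies the contrastive logits at every step."""
    for _ in range(max_new_tokens):
        logits = contrastive_decode_step(model, input_ids, pixel_values)
        next_token = logits.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return input_ids
```

In practice, published variants differ in how the "blind" branch is built (noised images, cropped images, or text-only decoding) and in how the contrast weight is scheduled, but the common principle is contrasting a visually grounded distribution against one that reflects the language prior alone.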
Papers
Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models
Xin Zou, Yizhou Wang, Yibo Yan, Sirui Huang, Kening Zheng, Junkai Chen, Chang Tang, Xuming Hu
Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models
Yufang Liu, Tao Ji, Changzhi Sun, Yuanbin Wu, Aimin Zhou
RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing Systems
Yihong Tang, Bo Wang, Xu Wang, Dongming Zhao, Jing Liu, Jijun Zhang, Ruifang He, Yuexian Hou
Pre-trained Language Models Return Distinguishable Probability Distributions to Unfaithfully Hallucinated Texts
Taehun Cha, Donghun Lee
Genetic Approach to Mitigate Hallucination in Generative IR
Hrishikesh Kulkarni, Nazli Goharian, Ophir Frieder, Sean MacAvaney
ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models
Yeji Park, Deokyeong Lee, Junsuk Choe, Buru Chang
Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models
Duy Khoa Pham, Bao Quoc Vo