Mitigating Hallucination
Hallucination, the generation of factually incorrect information by large language and vision-language models (LLMs and VLMs), is a significant obstacle to their reliable deployment. Current research mitigates the problem through several approaches, including preemptive detection based on internal model representations, data augmentation that creates counterfactual training examples, and contrastive decoding strategies that re-balance attention between visual and textual inputs. Successfully addressing hallucination is crucial for building trustworthy AI systems across diverse applications, from question answering and text summarization to medical diagnosis and legal research.
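To make the contrastive-decoding idea concrete, below is a minimal sketch of a single decoding step. It assumes you already have next-token logits from the same model under two conditions, a full input (e.g., image plus prompt) and a degraded one (e.g., the image removed or blurred); the names `logits_full`, `logits_degraded`, `alpha`, and `beta` are illustrative placeholders following the general contrastive-decoding recipe, not the exact method of any paper listed here.

```python
import numpy as np

def contrastive_decode_step(logits_full, logits_degraded, alpha=1.0, beta=0.1):
    """One step of a generic contrastive-decoding scheme (illustrative sketch).

    logits_full:     next-token logits conditioned on the full input
                     (e.g., image + prompt for a VLM).
    logits_degraded: logits from a degraded condition (e.g., image removed),
                     which tends to surface language-prior hallucinations.
    alpha:           contrast strength; alpha = 0 recovers greedy decoding.
    beta:            adaptive-plausibility cutoff relative to the best token.
    """
    # Amplify tokens the full input supports but the degraded input does not.
    contrasted = (1.0 + alpha) * logits_full - alpha * logits_degraded

    # Keep only tokens the full-input model already finds plausible, so the
    # contrast cannot promote otherwise unlikely tokens.
    probs_full = np.exp(logits_full - logits_full.max())
    probs_full /= probs_full.sum()
    plausible = probs_full >= beta * probs_full.max()
    contrasted = np.where(plausible, contrasted, -np.inf)

    return int(np.argmax(contrasted))

# Toy example over a 5-token vocabulary.
logits_full = np.array([2.0, 1.5, 0.3, -1.0, 0.1])
logits_degraded = np.array([0.5, 1.6, 0.2, -1.2, 0.0])
print(contrastive_decode_step(logits_full, logits_degraded))
```

In a real generation loop this step would be applied at every position, with the two logit vectors coming from parallel forward passes of the same model; the plausibility mask is what keeps the subtraction from rewarding tokens the model never seriously considered.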
Papers
Genetic Approach to Mitigate Hallucination in Generative IR
Hrishikesh Kulkarni, Nazli Goharian, Ophir Frieder, Sean MacAvaney
ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models
Yeji Park, Deokyeong Lee, Junsuk Choe, Buru Chang
Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models
Duy Khoa Pham, Bao Quoc Vo