Mitigating Hallucination
Hallucination, the generation of factually incorrect or unsupported content by large language models and vision-language models (LLMs and VLMs), is a major obstacle to their reliable deployment. Current research pursues several mitigation strategies, including preemptive detection based on a model's internal representations, data augmentation that constructs counterfactual training examples, and contrastive decoding schemes that re-balance the model's reliance on visual evidence versus its language prior (a sketch of the latter follows below). Addressing hallucination is essential for building trustworthy AI systems across applications ranging from question answering and text summarization to medical diagnosis and legal research.
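To make the contrastive-decoding idea concrete, the minimal sketch below contrasts next-token logits computed with the original image against logits computed with a distorted or blank image, so tokens favoured purely by the language prior are pushed down. This is an illustrative sketch in the style of visual contrastive decoding, not the exact procedure of any particular paper; the function name, the alpha/beta parameters, and the plausibility cutoff are assumptions chosen for clarity.

```python
import torch

def contrastive_decode(logits_full, logits_distorted, alpha=1.0, beta=0.1):
    """Illustrative visual-contrastive-decoding step (assumed parameterization).

    logits_full:      next-token logits conditioned on the original image + prompt
    logits_distorted: next-token logits conditioned on a distorted/blank image + prompt
    alpha:            contrast strength
    beta:             plausibility cutoff as a fraction of the max probability
    """
    # Contrast the two distributions: tokens that score highly even without the
    # real image (likely language-prior hallucinations) are suppressed.
    contrasted = (1 + alpha) * logits_full - alpha * logits_distorted

    # Plausibility constraint: keep only tokens that remain reasonably probable
    # under the image-conditioned distribution, to avoid degenerate choices.
    probs_full = torch.softmax(logits_full, dim=-1)
    cutoff = beta * probs_full.max(dim=-1, keepdim=True).values
    contrasted = contrasted.masked_fill(probs_full < cutoff, float("-inf"))

    return torch.argmax(contrasted, dim=-1)

# Toy usage: random logits stand in for a VLM's output over a 32k vocabulary.
vocab_size = 32000
logits_full = torch.randn(1, vocab_size)
logits_distorted = torch.randn(1, vocab_size)
print(contrastive_decode(logits_full, logits_distorted))
```

In practice, the two logit vectors would come from two forward passes of the same VLM, one on the real image and one on its distorted counterpart, applied at every decoding step.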