Model Hallucination

Model hallucination, the generation of factually incorrect or nonsensical outputs by large language models (LLMs) and other AI systems, is a significant obstacle to their reliable deployment. Current research focuses on detecting and mitigating hallucinations using techniques such as retrieval-augmented generation (RAG), contrastive decoding, and targeted instruction tuning, applied across model architectures ranging from LLMs to large vision-language models (LVLMs). These efforts aim to improve the accuracy and trustworthiness of AI systems in applications as diverse as medical report generation and virtual try-on. Robust hallucination detection and mitigation strategies are therefore a prerequisite for responsible AI deployment.
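
As one concrete illustration of a mitigation technique named above, the sketch below shows the token-scoring rule commonly used in contrastive decoding: a strong "expert" model's log-probabilities are contrasted against a weaker "amateur" model's, subject to a plausibility cutoff. It is a minimal sketch on toy logits rather than real models; the function name, vocabulary size, and `alpha` threshold are illustrative assumptions, not taken from any specific paper's implementation.

```python
import numpy as np

def contrastive_decoding_step(expert_logits, amateur_logits, alpha=0.1):
    """One greedy decoding step using a contrastive score (illustrative sketch).

    Tokens are scored by the difference between an "expert" model's
    log-probabilities and a weaker "amateur" model's, restricted to tokens
    the expert already considers plausible.
    """
    # Convert logits to log-probabilities (log-softmax).
    expert_logp = expert_logits - np.log(np.sum(np.exp(expert_logits)))
    amateur_logp = amateur_logits - np.log(np.sum(np.exp(amateur_logits)))

    # Plausibility constraint: keep only tokens whose expert probability is
    # within a factor `alpha` of the expert's most likely token.
    threshold = np.log(alpha) + expert_logp.max()
    plausible = expert_logp >= threshold

    # Contrastive score: penalize tokens the amateur also rates highly, which
    # tends to suppress generic or hallucination-prone continuations.
    scores = np.where(plausible, expert_logp - amateur_logp, -np.inf)
    return int(np.argmax(scores))

# Toy example with a 5-token vocabulary (made-up logits, not from real models).
expert = np.array([2.0, 1.8, 0.2, -1.0, -3.0])
amateur = np.array([2.5, 0.5, 0.1, -0.5, -2.0])
# Picks token 1: the expert favors it far more than the amateur does, whereas
# the expert's top token (0) is also the amateur's favorite and is down-weighted.
print(contrastive_decoding_step(expert, amateur))
```

In practice the two logit vectors would come from a larger and a smaller model of the same family at each generation step, and the selected token would be appended to the prompt before repeating the step.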

Papers