LLM Hallucination

Large language model (LLM) hallucinations, the generation of factually incorrect or nonsensical outputs, pose a significant challenge to reliable deployment. Current research proceeds on two fronts: understanding the underlying causes of hallucinations, for example by analyzing internal model representations and the influence of training data and prompt characteristics, and developing methods for detection and mitigation, such as those leveraging retrieval augmented generation (RAG) or epistemic neural networks. Addressing LLM hallucinations is crucial for improving the trustworthiness and safety of these powerful models across diverse applications, from healthcare to code generation.
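
As a rough illustration of the RAG-style mitigation mentioned above, the sketch below grounds an answer in retrieved passages and instructs the model to abstain when the context is insufficient. It is a minimal, self-contained example: the token-overlap retriever, the DOCUMENTS list, and the call_llm stub are illustrative placeholders, not any particular system's API.

```python
# Minimal RAG-style grounding sketch for hallucination mitigation.
# The retriever is a toy token-overlap scorer; `call_llm` is a hypothetical
# stand-in for whatever chat/completion API is actually used.

from collections import Counter
from typing import List

DOCUMENTS = [
    "Retrieval augmented generation supplies external passages to the model at inference time.",
    "Epistemic neural networks estimate model uncertainty, which can flag likely hallucinations.",
    "Hallucinations are outputs that are fluent but factually unsupported.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by simple token overlap with the query (toy retriever)."""
    q_tokens = Counter(query.lower().split())
    def score(doc: str) -> int:
        return sum((Counter(doc.lower().split()) & q_tokens).values())
    return sorted(docs, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str, passages: List[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real provider client."""
    return "(model output would appear here)"

if __name__ == "__main__":
    question = "How does retrieval augmented generation reduce hallucinations?"
    prompt = build_grounded_prompt(question, retrieve(question, DOCUMENTS))
    print(call_llm(prompt))
```

The design point is that grounding and an explicit abstention instruction shift the model from relying on parametric memory toward the supplied evidence, which is one common way RAG is used to reduce (though not eliminate) hallucinations.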

Papers