LLM Hallucination
Large language model (LLM) hallucinations, the generation of factually incorrect or nonsensical outputs, pose a significant challenge to the reliable deployment of these models. Current research focuses on understanding the underlying causes of hallucinations, for example by analyzing internal model representations and investigating the influence of training data and prompt characteristics, and on developing detection and mitigation methods such as retrieval-augmented generation (RAG) and epistemic neural networks. Addressing LLM hallucinations is crucial for improving the trustworthiness and safety of these powerful models across diverse applications, from healthcare to code generation.
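To make the RAG-based mitigation idea concrete, below is a minimal, illustrative sketch of the grounding step: retrieve passages relevant to a question and build a prompt that instructs the model to answer only from that context. The toy corpus, the word-overlap retriever, and the function names (retrieve, build_prompt) are assumptions for illustration only, not the method of any particular paper or library; a real system would use a vector store and an actual LLM call where noted.

```python
# Illustrative sketch of RAG-style grounding for hallucination mitigation.
# All names and the toy corpus are hypothetical; swap in a real retriever
# and LLM call in practice.

from collections import Counter

CORPUS = [
    "The Eiffel Tower was completed in 1889 and stands in Paris.",
    "Mount Everest, at 8,848 metres, is the highest mountain above sea level.",
    "Python was first released by Guido van Rossum in 1991.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest word overlap with the query."""
    query_words = Counter(query.lower().split())

    def score(passage: str) -> int:
        return sum((query_words & Counter(passage.lower().split())).values())

    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from the context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    prompt = build_prompt(question, retrieve(question, CORPUS))
    print(prompt)  # Pass this grounded prompt to the LLM of your choice.
```

The design point is that the model is constrained to cite retrieved evidence and to abstain when the context lacks an answer, which is the basic mechanism by which RAG reduces fabricated facts.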