LLM Hallucination
Large language model (LLM) hallucinations, i.e., the generation of factually incorrect or nonsensical outputs, pose a significant challenge to the reliable deployment of these models. Current research focuses on understanding the underlying causes of hallucinations, for example by analyzing internal model representations and by investigating the influence of training data and prompt characteristics, and on developing methods for detection and mitigation, such as retrieval augmented generation (RAG) and epistemic neural networks. Addressing LLM hallucinations is crucial for improving the trustworthiness and safety of these powerful models across diverse applications, from healthcare to code generation.
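As a rough illustration of the mitigation side, the sketch below shows RAG-style grounding in its simplest form: retrieve supporting passages, then instruct the model to answer only from them and to abstain otherwise. This is a minimal sketch, not a method from any particular paper; the names `DOCUMENTS`, `retrieve`, `build_grounded_prompt`, and `llm_generate` are hypothetical, and `llm_generate` is a stub standing in for whatever LLM API or local model is actually used.

```python
# Minimal RAG-style grounding sketch for hallucination mitigation (illustrative only).
# Assumption: `llm_generate` is a hypothetical stand-in for a real LLM call.

from collections import Counter

DOCUMENTS = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "Retrieval augmented generation conditions an LLM on retrieved passages.",
    "Epistemic uncertainty estimates can flag answers the model is unsure about.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (toy retriever)."""
    q_tokens = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: -sum(q_tokens[t] for t in d.lower().split()))
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context, or abstain."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, reply 'I don't know'.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def llm_generate(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or local model."""
    return "(model output would appear here)"

if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    passages = retrieve(question, DOCUMENTS)
    print(llm_generate(build_grounded_prompt(question, passages)))
```

In practice the toy word-overlap retriever would be replaced by a dense or hybrid retriever, and the abstention instruction is only one of several grounding strategies studied in this area.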