LLM Hallucination
Large language model (LLM) hallucinations, the generation of factually incorrect or nonsensical outputs, pose a significant challenge to reliable deployment. Current research proceeds on two fronts: understanding the underlying causes of hallucinations, for example by analyzing internal model representations and the influence of training data and prompt characteristics, and developing methods for detection and mitigation, such as retrieval augmented generation (RAG) and epistemic neural networks. Addressing hallucinations is crucial for improving the trustworthiness and safety of these models across diverse applications, from healthcare to code generation.
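To make the RAG-based mitigation idea concrete, the sketch below shows a minimal retrieval-augmented prompting loop: retrieve supporting passages, prepend them to the prompt, and instruct the model to answer only from that evidence. The document store, the toy keyword-overlap retriever, and the prompt wording are illustrative assumptions, not a specific method from any of the papers referenced on this page.

```python
# Minimal sketch of retrieval-augmented prompting to reduce hallucinations.
# The document store, the keyword-overlap retriever, and the prompt template
# are assumptions for illustration only.

from typing import List

DOCUMENTS = [
    "Retrieval augmented generation (RAG) grounds model outputs in retrieved text.",
    "LLM hallucinations are factually incorrect or nonsensical outputs.",
    "Epistemic uncertainty estimates can flag answers a model is unsure about.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, docs: List[str]) -> str:
    """Prepend retrieved evidence and instruct the model to answer only from it."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt would then be passed to whatever LLM is in use.
    print(build_grounded_prompt("What is RAG and how does it help with hallucinations?", DOCUMENTS))
```

The key design choice is that the model is constrained to the retrieved context and given an explicit fallback ("say you do not know"), which trades some coverage for a lower risk of fabricated answers.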
Papers