Reducing Hallucination
Large language models (LLMs) are prone to "hallucinations"—generating factually incorrect or nonsensical information. Current research focuses on mitigating this issue through various techniques, including grounding LLM responses in external knowledge bases (e.g., using Retrieval Augmented Generation), improving uncertainty estimation within the models themselves, and leveraging contrastive learning or multi-agent debate frameworks to refine model outputs. Successfully reducing hallucinations is crucial for increasing the reliability and trustworthiness of LLMs across diverse applications, from question answering and summarization to multimodal tasks involving image and text.
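To make the grounding idea concrete, below is a minimal, self-contained sketch of Retrieval Augmented Generation (RAG)-style prompting. Everything here is illustrative: the toy bag-of-words "embedding", the in-memory corpus, and the prompt template are assumptions, not any particular system's implementation; a real pipeline would use learned embeddings, a vector store, and an LLM API for the final generation step.

```python
# Illustrative RAG-style grounding sketch (hypothetical components throughout).
# Retrieve the passages most similar to the query, then build a prompt that
# instructs the model to answer only from that retrieved evidence.

import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top-k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_grounded_prompt(query: str, documents: list[str]) -> str:
    # Prepend retrieved passages and instruct the model to abstain when the
    # evidence is insufficient, rather than inventing an answer.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        'If the context does not contain the answer, say "I don\'t know."\n'
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is located in Paris and was completed in 1889.",
        "Mount Everest is the highest mountain above sea level.",
        "Retrieval Augmented Generation grounds model outputs in retrieved text.",
    ]
    print(build_grounded_prompt("When was the Eiffel Tower completed?", corpus))
```

The resulting prompt would then be passed to an LLM; constraining the answer to retrieved passages, together with an explicit abstention instruction, is one simple way grounding reduces fabricated facts.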