Knowledge Boundary
Knowledge boundary research in large language models (LLMs) focuses on improving their ability to accurately assess and communicate the limits of what they know, thereby reducing the generation of inaccurate or fabricated information ("hallucinations"). Current research investigates methods to calibrate LLMs' internal confidence estimates, to develop effective rejection mechanisms that let models abstain on questions beyond their knowledge scope, and to leverage techniques such as retrieval augmentation to improve accuracy and reduce overconfidence. This work is crucial for increasing the reliability and trustworthiness of LLMs, broadening their applicability in fields that demand factual accuracy and responsible information access.
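To make the abstention idea concrete, below is a minimal Python sketch of one common confidence-and-rejection pattern: sample the model several times, use agreement across samples as a rough confidence proxy, and refuse to answer when agreement falls below a threshold. This is an illustrative composite of the techniques described above, not any specific paper's method; `sample_answer`, `n_samples`, and `confidence_threshold` are hypothetical names and values chosen for the example.

```python
import collections
from typing import Callable


def answer_or_abstain(
    question: str,
    sample_answer: Callable[[str], str],  # hypothetical stand-in for one stochastic LLM call
    n_samples: int = 10,
    confidence_threshold: float = 0.6,
) -> str:
    """Sample the model several times and abstain when answers disagree.

    Agreement across stochastic samples serves as a rough proxy for the
    model's confidence; low agreement suggests the question may lie
    outside its knowledge boundary.
    """
    answers = [sample_answer(question) for _ in range(n_samples)]
    top_answer, count = collections.Counter(answers).most_common(1)[0]
    confidence = count / n_samples
    if confidence < confidence_threshold:
        # Rejection mechanism: refuse rather than risk a hallucinated answer.
        return "I don't know."
    return top_answer


if __name__ == "__main__":
    # Toy usage with a random stub standing in for a real model call.
    import random

    stub = lambda q: random.choice(["Paris", "Paris", "Paris", "Lyon"])
    print(answer_or_abstain("What is the capital of France?", stub))
```

In practice the confidence signal might instead come from token-level log-probabilities or a trained verifier, and a retrieval-augmented system would query an external corpus before falling back to abstention; the sampling-agreement version above is simply the easiest variant to show self-contained.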