LLM Confidence
Large language model (LLM) confidence research focuses on improving the reliability of LLMs by accurately assessing and communicating how likely their outputs are to be correct. Current efforts concentrate on capturing a model's inherent uncertainty, training models to express confidence appropriately, and calibrating that expressed confidence so it aligns with human perception of reliability, often using techniques such as learning from past experience and prompt engineering with retrieval augmentation. This work is crucial for building trustworthy AI systems, particularly in high-stakes applications where understanding the reliability of AI-generated information is paramount.
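To make the idea of assessing and calibrating confidence concrete, here is a minimal sketch, not tied to any specific paper in this area: it derives a confidence score for a generated answer from per-token log-probabilities (length-normalized) and then measures calibration with Expected Calibration Error (ECE), the standard gap between stated confidence and observed accuracy. The function names, the toy data, and the choice of ECE as the metric are illustrative assumptions, not a method from the listed work.

```python
# Sketch: token-probability confidence scoring and ECE-based calibration check.
# Assumes per-token log-probs are available for each answer, plus a binary
# correctness label per example; all names here are illustrative.

import math
from typing import List, Sequence


def sequence_confidence(token_logprobs: Sequence[float]) -> float:
    """Length-normalized answer probability: exp(mean log p).

    Averaging in log space avoids penalizing longer answers the way a raw
    product of token probabilities would.
    """
    return math.exp(sum(token_logprobs) / len(token_logprobs))


def expected_calibration_error(
    confidences: List[float], correct: List[bool], n_bins: int = 10
) -> float:
    """ECE: bin predictions by confidence, then take the accuracy-weighted gap
    between average confidence and empirical accuracy in each bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece


if __name__ == "__main__":
    # Toy data: per-token log-probs for three answers and whether each was correct.
    answers = [
        ([-0.05, -0.10, -0.02], True),   # confident and correct
        ([-1.20, -0.90, -1.50], False),  # unconfident and wrong
        ([-0.30, -0.20, -0.40], True),
    ]
    confs = [sequence_confidence(lp) for lp, _ in answers]
    labels = [ok for _, ok in answers]
    print("confidences:", [round(c, 3) for c in confs])
    print("ECE:", round(expected_calibration_error(confs, labels), 3))
```

A low ECE means the model's expressed confidence tracks how often it is actually right; calibration methods in this line of work aim to shrink that gap, whether the confidence comes from token probabilities or from verbalized statements.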