LLM Uncertainty
Large language model (LLM) uncertainty quantification develops methods to reliably estimate how confident an LLM is in its predictions, with the goal of making its outputs more trustworthy in practice. Current research explores several approaches: measuring response consistency across repeated sampling or prompt variations, leveraging internal model signals such as token probabilities or hidden states (where accessible), and designing prompting strategies that elicit self-reported confidence. Such methods are essential for the responsible deployment of LLMs in high-stakes domains like healthcare and scientific research, where recognizing and mitigating model uncertainty is a prerequisite for accurate, trustworthy results.
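To make the consistency-based family of methods concrete, the sketch below estimates confidence by sampling the same prompt several times and measuring agreement among the answers. It is a minimal illustration, not any specific paper's method: the `generate` callable is a hypothetical stand-in for an LLM sampler, and answers are compared by exact string match after simple normalization, whereas practical systems often cluster responses by meaning (e.g., semantic-entropy-style approaches) rather than by surface form.

```python
import math
from collections import Counter
from typing import Callable, Dict, List


def consistency_confidence(
    generate: Callable[[str], str],  # hypothetical sampler: prompt -> one sampled answer
    prompt: str,
    n_samples: int = 10,
) -> Dict[str, object]:
    """Estimate uncertainty by repeatedly sampling the same prompt and
    measuring how often the model returns the same (normalized) answer."""
    answers: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]
    counts = Counter(answers)
    total = sum(counts.values())

    # Confidence: relative frequency of the most common answer (majority agreement).
    top_answer, top_count = counts.most_common(1)[0]
    confidence = top_count / total

    # Predictive entropy of the empirical answer distribution (higher = more uncertain).
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())

    return {"answer": top_answer, "confidence": confidence, "entropy": entropy}


if __name__ == "__main__":
    import random

    # Toy stand-in for an LLM sampler, used only to make the sketch runnable.
    def toy_generate(prompt: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    print(consistency_confidence(toy_generate, "What is the capital of France?"))
```

The same agreement statistic can be computed over paraphrased prompts instead of repeated samples, which probes robustness to prompt wording rather than sampling noise.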
Papers
Twelve papers on this topic, dated June 22, 2023 through October 16, 2024.