LLM Uncertainty

Large language model (LLM) uncertainty quantification develops methods to assess how confident an LLM is in its outputs, with the goal of making its applications more trustworthy and reliable. Current research explores several approaches: measuring the consistency of responses across repeated sampling or prompt variations, leveraging internal model signals such as token probabilities or hidden states (where accessible), and designing prompting strategies that elicit self-reported confidence. This work is crucial for the responsible deployment of LLMs in high-stakes domains such as healthcare and scientific research, where understanding and mitigating model uncertainty is essential for accurate and trustworthy results.
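As an illustration of the sampling-consistency idea mentioned above, the sketch below estimates confidence from how often repeated sampled generations agree. It is a minimal example, not any particular paper's method: `generate` is a hypothetical stand-in for whatever LLM call you use (sampled with temperature > 0), and `consistency_confidence` and `normalize` are illustrative helper names introduced here.

```python
import collections
import math
import re


def normalize(answer: str) -> str:
    """Normalize a free-text answer so trivially different phrasings match."""
    return re.sub(r"\s+", " ", answer.strip().lower())


def consistency_confidence(generate, prompt: str, n_samples: int = 10):
    """Estimate confidence from agreement across repeated sampled generations.

    `generate` is a stand-in for any LLM call returning one sampled
    completion per invocation. Returns the majority answer, its empirical
    frequency as a confidence score in [0, 1], and the entropy of the
    answer distribution (higher entropy = more uncertainty).
    """
    samples = [normalize(generate(prompt)) for _ in range(n_samples)]
    counts = collections.Counter(samples)
    majority_answer, majority_count = counts.most_common(1)[0]
    confidence = majority_count / n_samples
    entropy = -sum(
        (c / n_samples) * math.log2(c / n_samples) for c in counts.values()
    )
    return majority_answer, confidence, entropy


if __name__ == "__main__":
    # Toy stand-in for a sampled LLM call; a real setup would query a model API.
    import random

    def fake_generate(prompt: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    answer, conf, ent = consistency_confidence(fake_generate, "Capital of France?")
    print(f"answer={answer!r} confidence={conf:.2f} entropy={ent:.2f} bits")
```

Exact-match normalization is the simplest possible agreement criterion; published variants typically cluster semantically equivalent answers (e.g., via entailment models) before computing the agreement or entropy score.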

Papers