Confidence Level
Confidence levels in large language models (LLMs) and other AI systems are an active area of research concerned with how accurately these models assess and communicate their certainty in predictions. Current work investigates methods for aligning a model's internal confidence (e.g., derived from token probabilities) with the confidence it expresses to users, as well as how human users perceive and interpret that expressed confidence. Such research underpins trust in AI systems and their reliable deployment in high-stakes domains such as healthcare and remote sensing, where accurate confidence estimates are essential for safe and effective use.
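As a minimal sketch of the two quantities this line of work compares, the snippet below derives an internal confidence score from per-token log-probabilities (a length-normalized geometric mean of token probabilities) and measures calibration with Expected Calibration Error (ECE), a common metric for the gap between stated confidence and empirical accuracy. The function names and the toy inputs are illustrative assumptions, not an API from any specific paper or library.

```python
import math
from typing import Sequence


def sequence_confidence(token_logprobs: Sequence[float]) -> float:
    """Length-normalized internal confidence for one generated answer.

    Uses the geometric mean of per-token probabilities, i.e.
    exp(mean of token log-probabilities), so longer answers are not
    penalized merely for containing more tokens.
    """
    if not token_logprobs:
        raise ValueError("token_logprobs must be non-empty")
    return math.exp(sum(token_logprobs) / len(token_logprobs))


def expected_calibration_error(
    confidences: Sequence[float],
    correctness: Sequence[int],
    n_bins: int = 10,
) -> float:
    """Expected Calibration Error over a set of predictions.

    Bins predictions by confidence and accumulates the gap between
    average confidence and empirical accuracy per bin, weighted by
    bin size. A perfectly calibrated model has ECE == 0.
    """
    assert len(confidences) == len(correctness)
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, correct in zip(confidences, correctness):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(avg_conf - accuracy)
    return ece


if __name__ == "__main__":
    # Toy log-probabilities for the tokens of a single model answer.
    logprobs = [-0.05, -0.20, -0.01, -0.60]
    print(f"internal confidence: {sequence_confidence(logprobs):.3f}")

    # Toy calibration check: stated confidences vs. whether each answer was correct.
    stated = [0.9, 0.8, 0.95, 0.6, 0.7]
    correct = [1, 1, 0, 1, 0]
    print(f"ECE: {expected_calibration_error(stated, correct):.3f}")
```

In practice, the internal score would come from the model's returned log-probabilities and the "stated" confidence from a verbalized self-report, with calibration metrics like ECE used to quantify how well the two track actual correctness.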