Confidence Estimation Models
Confidence estimation models aim to quantify the uncertainty associated with the predictions of machine learning models, particularly large language models (LLMs) and automatic speech recognition (ASR) systems, with the goal of improving their trustworthiness and reliability. Current research moves beyond raw probability outputs, exploring techniques such as surrogate models, prompt engineering, and novel loss functions to achieve better calibration and generalization across diverse datasets and model architectures. These advances are crucial for the responsible deployment of AI systems, enabling more informed decision-making and enhancing user trust.
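To make the notion of "calibration" concrete, the sketch below is a minimal NumPy example (not drawn from any of the papers listed here; the function names and toy numbers are illustrative assumptions). It computes the expected calibration error of a set of confidence scores and applies temperature scaling, one common post-hoc calibration technique.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: average gap between predicted
    confidence and observed accuracy, weighted by bin occupancy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # observed accuracy in this bin
            conf = confidences[mask].mean()   # average confidence in this bin
            ece += mask.mean() * abs(acc - conf)
    return ece

def temperature_scale(logits, temperature):
    """Post-hoc calibration: a temperature T > 1 softens the softmax
    distribution, T < 1 sharpens it, without changing the argmax class."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max(axis=-1, keepdims=True)
    probs = np.exp(z)
    return probs / probs.sum(axis=-1, keepdims=True)

# Hypothetical usage: confidences for five predictions and whether each was correct.
conf = [0.95, 0.80, 0.90, 0.60, 0.99]
hit = [1, 1, 0, 1, 1]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A well-calibrated model would show predictions with 80% confidence being correct about 80% of the time, i.e. an ECE near zero; the temperature parameter is typically tuned on a held-out validation set to minimize that gap.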