Disentangling Confidence Score Distributions
Disentangling confidence score distributions aims to improve the reliability of model predictions by better representing a model's uncertainty. Current research focuses on developing methods to calibrate confidence scores, particularly in large language models and other machine learning applications, using techniques like energy-based learning, adaptive decoding, and incorporating listener awareness. This work is crucial for enhancing the trustworthiness and robustness of AI systems across diverse fields, from medical diagnosis to natural language processing, by enabling more informed decision-making based on accurate assessments of model certainty.
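As a concrete illustration of what "calibrating confidence scores" means, the sketch below shows temperature scaling, a standard calibration baseline (not drawn from any specific paper above): a model's logits are divided by a scalar temperature T, chosen to minimize negative log-likelihood on held-out data, so that softmax confidences better match empirical accuracy. The grid-search fit and the toy data are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax over class logits, scaled by temperature T (T > 1 softens confidences)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Mean negative log-likelihood of the true labels at temperature T."""
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.25, 8.0, 64)):
    """Pick the temperature minimizing held-out NLL (simple grid search for clarity)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy held-out set: an artificially overconfident 3-class model.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3))
logits[np.arange(500), labels] += 2.0  # correct class usually wins...
logits *= 5.0                          # ...but confidences are inflated

T = fit_temperature(logits, labels)    # expect T > 1 for an overconfident model
```

In practice the temperature would be fit with a proper optimizer (e.g. gradient descent on the validation NLL) rather than a grid search, but the effect is the same: T rescales confidence without changing the model's predicted classes.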