Model Confidence
Model confidence, the degree to which a model believes its own predictions are correct, is crucial for the reliable deployment of machine learning systems, particularly in high-stakes applications. Current research focuses on improving calibration, that is, ensuring a model's stated confidence accurately reflects its prediction accuracy, across architectures ranging from convolutional neural networks to large language models (LLMs), often using techniques such as label smoothing, self-consistency, and novel confidence estimation methods. This work is vital for building trust in AI systems and enabling more effective human-AI collaboration, since it gives users a clearer picture of a model's reliability and limitations.
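As a rough illustration of what "calibration" means in practice, the sketch below computes Expected Calibration Error (ECE), a common way to measure the gap between stated confidence and empirical accuracy. This is a minimal, illustrative implementation in plain NumPy, not the method of any particular paper listed on this page; the function and variable names are assumptions made for the example.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |confidence - accuracy| gap, weighted by the fraction of
    predictions falling into each confidence bin (a standard ECE sketch)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_confidence = confidences[in_bin].mean()  # average stated confidence in this bin
        bin_accuracy = correct[in_bin].mean()        # empirical accuracy in this bin
        ece += in_bin.mean() * abs(bin_confidence - bin_accuracy)
    return ece

# Example: a model that reports 0.9 confidence but is right only ~60% of the
# time is overconfident, and ECE reflects roughly that 0.3 gap.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conf = np.full(1000, 0.9)
    hits = rng.random(1000) < 0.6
    print(f"ECE: {expected_calibration_error(conf, hits):.3f}")
```

A well-calibrated model drives this value toward zero: among predictions made with, say, 80% confidence, about 80% should actually be correct.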