Model Confidence
Model confidence, a model's own estimate of how likely its predictions are to be correct, is crucial for the reliable deployment of machine learning systems, particularly in high-stakes applications. Current research focuses on improving the calibration of model confidence, that is, ensuring that stated confidence accurately reflects prediction accuracy, across architectures ranging from large language models (LLMs) to convolutional neural networks, often employing techniques such as label smoothing, self-consistency, and novel confidence estimation methods. This work is vital for building trust in AI systems and for enabling more effective human-AI collaboration, since it gives users a clearer understanding of a model's reliability and limitations.
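To make the notion of calibration concrete, the sketch below computes the expected calibration error (ECE), a common way to quantify how far stated confidence drifts from observed accuracy. It is a minimal, illustrative implementation using NumPy; the bin count and the sample arrays are arbitrary choices for demonstration and are not drawn from any specific paper listed here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin.

    confidences: predicted confidence for each example, in (0, 1].
    correct: 1 if the prediction was right, 0 otherwise.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_weight = in_bin.mean()             # fraction of samples in this bin
        bin_confidence = confidences[in_bin].mean()  # average stated confidence
        bin_accuracy = correct[in_bin].mean()        # observed accuracy
        ece += bin_weight * abs(bin_confidence - bin_accuracy)
    return ece

# A well-calibrated model that says "0.8" should be right about 80% of the time;
# the larger the ECE, the further the model is from that ideal.
conf = np.array([0.9, 0.8, 0.75, 0.6, 0.95])
hits = np.array([1, 1, 0, 1, 1])
print(expected_calibration_error(conf, hits))
```

Calibration methods such as temperature scaling or label smoothing are typically evaluated by checking whether they reduce metrics like this one on held-out data.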