Uncertainty Calibration
Uncertainty calibration in machine learning aims to ensure that a model's confidence scores accurately reflect its prediction accuracy, preventing overconfidence in incorrect predictions. Current research focuses on developing and evaluating calibration methods, including post-hoc techniques such as temperature scaling alongside training-time approaches such as ensembles and Bayesian neural networks, across diverse model architectures including convolutional neural networks, graph neural networks, and large language models. Accurate calibration is crucial for building reliable and trustworthy AI systems, particularly in safety-critical applications like autonomous driving and medical diagnosis, where sound uncertainty quantification underpins responsible decision-making.
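To make the idea concrete, here is a minimal sketch of temperature scaling, the simplest post-hoc method mentioned above: a single scalar temperature T is fit on held-out validation logits to minimize negative log-likelihood, then used to soften (or sharpen) the softmax at prediction time. The function names (`fit_temperature`, `nll`) and the grid-search fitting strategy are illustrative choices, not a reference implementation; practical versions typically optimize T with gradient descent on a validation set.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 46)):
    # Hypothetical fitting routine: grid search for the temperature
    # that minimizes validation NLL. T > 1 softens overconfident
    # predictions; T < 1 sharpens underconfident ones. Accuracy is
    # unchanged because dividing logits by T preserves the argmax.
    return min(grid, key=lambda T: nll(val_logits, val_labels, T))
```

For an overconfident model (e.g., near-certain predictions that are right only 60% of the time), the fitted temperature comes out above 1, pulling the reported confidences down toward the observed accuracy.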