Uncertainty Calibration
Uncertainty calibration in machine learning aims to ensure that a model's confidence scores accurately reflect how often its predictions are correct, preventing overconfidence in wrong predictions. Current research focuses on developing and evaluating calibration methods, from post-hoc techniques such as temperature scaling to training-time approaches such as deep ensembles and Bayesian neural networks, across diverse model architectures including convolutional neural networks, graph neural networks, and large language models. This is crucial for building reliable and trustworthy AI systems, particularly in safety-critical applications like autonomous driving and medical diagnosis, where accurate uncertainty quantification is paramount for responsible decision-making.
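Temperature scaling, the most widely used post-hoc method, illustrates the idea concisely: a single scalar T > 0 is fit on held-out validation data to minimize negative log-likelihood, and miscalibration is typically quantified with expected calibration error (ECE). The sketch below is a minimal NumPy/SciPy illustration, assuming `logits` and `labels` come from a validation set; the function names are placeholders, not any particular paper's implementation.

```python
# Minimal sketch of post-hoc temperature scaling and ECE.
# Assumption: `logits` is an (N, K) array of pre-softmax scores and
# `labels` is an (N,) array of integer class labels from held-out data.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(temperature, logits, labels):
    # Negative log-likelihood of the true labels under T-scaled softmax.
    probs = softmax(logits / temperature)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels):
    # Fit the single scalar T on validation data. Accuracy is unchanged
    # because argmax(logits / T) == argmax(logits) for any T > 0.
    result = minimize_scalar(nll, bounds=(0.05, 10.0),
                             args=(logits, labels), method="bounded")
    return result.x

def expected_calibration_error(probs, labels, n_bins=15):
    # ECE: bin predictions by confidence, then average the gap between
    # each bin's accuracy and its mean confidence, weighted by bin size.
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            acc = (pred[mask] == labels[mask]).mean()
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece
```

A standard sanity check is to compare ECE on the validation set before and after dividing the logits by the fitted T; since scaling by a positive constant preserves the argmax, calibration improves without any change in accuracy.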