Uncertainty Calibration
Uncertainty calibration in machine learning aims to ensure that a model's confidence scores accurately reflect its prediction accuracy, preventing overconfidence in incorrect predictions. Current research focuses on developing and evaluating calibration methods, ranging from post-hoc techniques such as temperature scaling to ensemble methods and Bayesian neural networks, across diverse model architectures including convolutional neural networks, graph neural networks, and large language models. This is crucial for building reliable and trustworthy AI systems, particularly in safety-critical applications like autonomous driving and medical diagnosis, where accurate uncertainty quantification is paramount for responsible decision-making.
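As a concrete illustration of the post-hoc approach, below is a minimal sketch of temperature scaling: a single scalar T > 0 divides the logits before the softmax and is fit by minimizing negative log-likelihood on a held-out validation set, leaving the predicted classes unchanged. This assumes NumPy/SciPy; `val_logits`, `val_labels`, and `test_logits` are hypothetical arrays, not taken from any paper referenced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(temperature, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    probs = softmax(logits / temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    """Find the scalar T > 0 that minimizes validation NLL."""
    result = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded",
                             args=(val_logits, val_labels))
    return result.x

# Usage: fit T on validation data, then rescale test-time confidences.
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = softmax(test_logits / T)
```

Because dividing logits by a positive scalar preserves their ordering, temperature scaling adjusts confidence without affecting accuracy, which is why it is a common post-hoc baseline.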