Calibration Error
Calibration error quantifies the discrepancy between a model's predicted probabilities and its observed accuracy, so that a stated confidence of, say, 80% can be trusted to correspond to roughly 80% correctness. Current research focuses on developing better estimators of calibration, particularly for complex models such as large language models and deep networks used in image segmentation and object detection, and on algorithms that reduce miscalibration, for example temperature scaling and test-time calibration. Addressing calibration error is crucial for deploying trustworthy machine learning systems in high-stakes applications, for improving downstream decision-making across diverse fields, and for fostering confidence in the reliability of AI.
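As a concrete illustration (not drawn from the papers below), the sketch that follows shows the widely used binned Expected Calibration Error (ECE) estimator together with temperature scaling; the function names, the 15-bin default, and the NumPy-only setup are illustrative assumptions rather than any specific paper's implementation.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: weighted average gap between mean confidence and accuracy per bin.

    confidences: predicted probability of the predicted class, shape (N,)
    correct:     1.0 if the prediction was right, else 0.0, shape (N,)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()   # mean confidence in this bin
        avg_acc = correct[in_bin].mean()        # empirical accuracy in this bin
        ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

def apply_temperature(logits, T):
    """Temperature scaling: divide logits by a scalar T (fit on held-out data) before softmax."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

In practice, T is chosen by minimizing negative log-likelihood on a validation set; T > 1 softens overconfident predictions, which is the typical failure mode of modern deep networks.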
Papers
Consistent and Asymptotically Unbiased Estimation of Proper Calibration Errors
Teodora Popordanoska, Sebastian G. Gruber, Aleksei Tiulpin, Florian Buettner, Matthew B. Blaschko
Estimating calibration error under label shift without labels
Teodora Popordanoska, Gorjan Radevski, Tinne Tuytelaars, Matthew B. Blaschko