Calibration Error
Calibration error quantifies the discrepancy between a model's predicted probabilities and its actual accuracy, so that confidence scores can be interpreted reliably. Current research focuses on better metrics for evaluating calibration, particularly for complex models such as large language models and deep networks used in image segmentation and object detection, and on algorithms that mitigate miscalibration through techniques such as temperature scaling and test-time calibration. Reducing calibration error is crucial for deploying trustworthy machine learning systems in high-stakes applications, for improving downstream decision-making across diverse fields, and for fostering greater confidence in the reliability of AI systems.
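As an illustrative sketch only (not drawn from the papers below), the commonly used expected calibration error (ECE) makes the idea concrete: predictions are grouped into confidence bins, and the gap between each bin's accuracy and its average confidence is averaged, weighted by bin size. The function and variable names below are hypothetical.

```python
# Minimal sketch of expected calibration error (ECE) with equal-width bins.
# Names are illustrative; this is not the implementation from any listed paper.
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Weighted average of |bin accuracy - bin confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Samples whose top-class confidence falls in (lo, hi].
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_weight = in_bin.mean()                                   # fraction of samples in this bin
        bin_accuracy = (predictions[in_bin] == labels[in_bin]).mean()  # empirical accuracy in the bin
        bin_confidence = confidences[in_bin].mean()                  # average predicted confidence
        ece += bin_weight * abs(bin_accuracy - bin_confidence)
    return ece

# Example: a model that is ~99% confident but only 50% accurate is badly
# miscalibrated, giving an ECE close to 0.5.
conf = np.array([0.99, 0.98, 0.97, 0.99])
pred = np.array([1, 0, 1, 0])
true = np.array([1, 1, 1, 1])
print(expected_calibration_error(conf, pred, true, n_bins=10))
```

Post-hoc methods such as temperature scaling then adjust the predicted probabilities (e.g., by dividing logits by a learned scalar before the softmax) to shrink exactly this kind of gap.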
Papers
An Elementary Predictor Obtaining $2\sqrt{T}+1$ Distance to Calibration
Eshwar Ram Arunachaleswaran, Natalie Collina, Aaron Roth, Mirah Shi
Don't Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection
Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu