Calibration Performance
Calibration performance, the alignment of predicted probabilities with observed frequencies, is crucial for reliable machine learning models across diverse applications. Current research focuses on improving calibration in various model types, including deep neural networks, large language models, and Gaussian processes, employing techniques like temperature scaling, isotonic regression, and novel loss functions designed to directly optimize calibration metrics such as Expected Calibration Error (ECE). These advancements are vital for ensuring trustworthy predictions in high-stakes domains like medical diagnosis, autonomous driving, and climate forecasting, where accurate uncertainty quantification is paramount. Furthermore, research is exploring the relationship between calibration and other desirable model properties, such as robustness and generalization.
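To make the metric mentioned above concrete, here is a minimal sketch of binned Expected Calibration Error (ECE) together with temperature scaling. Function names and the binning scheme are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| per bin.

    confidences: predicted probability of the predicted class, in [0, 1].
    correct: 1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; samples with confidence exactly 0 are rare
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # observed frequency in this bin
            conf = confidences[mask].mean()   # mean predicted probability
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

def temperature_scale(logits, T):
    """Post-hoc calibration: divide logits by T before softmax (T > 1 softens)."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)
```

In practice the temperature T is fit on a held-out validation set (e.g. by minimizing negative log-likelihood), after which ECE is re-evaluated to check whether calibration improved.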
Papers
What is Your Metric Telling You? Evaluating Classifier Calibration under Context-Specific Definitions of Reliability
John Kirchenbauer, Jacob Oaks, Eric Heim
Feature-Distribution Perturbation and Calibration for Generalized Person ReID
Qilei Li, Jiabo Huang, Jian Hu, Shaogang Gong
Calibrate and Refine! A Novel and Agile Framework for ASR-error Robust Intent Detection
Peilin Zhou, Dading Chong, Helin Wang, Qingcheng Zeng