Calibration Performance
Calibration performance, the alignment of predicted probabilities with observed frequencies, is crucial for reliable machine learning models across diverse applications. Current research focuses on improving calibration in various model types, including deep neural networks, large language models, and Gaussian processes, employing techniques like temperature scaling, isotonic regression, and novel loss functions designed to directly optimize calibration metrics such as Expected Calibration Error (ECE). These advancements are vital for ensuring trustworthy predictions in high-stakes domains like medical diagnosis, autonomous driving, and climate forecasting, where accurate uncertainty quantification is paramount. Furthermore, research is exploring the relationship between calibration and other desirable model properties, such as robustness and generalization.
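Two of the techniques named above, the Expected Calibration Error (ECE) metric and temperature scaling, can be illustrated with a minimal sketch. This follows the standard definitions (binned ECE over top-class confidences; dividing logits by a scalar temperature before the softmax); the function names here are illustrative, not from any specific library.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: the weighted average gap between mean confidence
    and empirical accuracy within each confidence bin.
    confidences: (N,) top-class predicted probabilities
    correct:     (N,) 1 if the prediction was right, else 0"""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

def temperature_scale(logits, temperature):
    """Soften (T > 1) or sharpen (T < 1) predicted probabilities by
    dividing logits by a scalar temperature before the softmax."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```

In practice the temperature is fit on a held-out validation set (e.g. by minimizing negative log-likelihood) and leaves the model's accuracy unchanged, since dividing logits by a positive scalar preserves their ranking.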
Papers
Self-Consistency Boosts Calibration for Math Reasoning
Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu
Transferring BCI models from calibration to control: Observing shifts in EEG features
Ivo Pascal de Jong, Lüke Luna van den Wittenboer, Matias Valdenegro-Toro, Andreea Ioana Sburlea