Calibration Performance
Calibration performance, i.e., how closely a model's predicted probabilities match the frequencies actually observed, is crucial for reliable machine learning across diverse applications. Current research focuses on improving calibration in deep neural networks, large language models, and Gaussian processes, using techniques such as temperature scaling, isotonic regression, and loss functions designed to directly optimize calibration metrics like the Expected Calibration Error (ECE). These advances are vital for trustworthy predictions in high-stakes domains such as medical diagnosis, autonomous driving, and climate forecasting, where accurate uncertainty quantification is paramount. Research is also exploring how calibration relates to other desirable model properties, such as robustness and generalization.
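To make the metric and the most common post-hoc fix concrete, here is a minimal sketch of a binned ECE computation and temperature scaling. The function names, the 15-bin default, and the grid-search range for the temperature are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Binned ECE: weighted average of |accuracy - confidence| over confidence bins."""
    confidences = probs.max(axis=1)            # top-class probability per sample
    predictions = probs.argmax(axis=1)         # predicted class per sample
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # weight each bin by the fraction of samples it contains
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece

def temperature_scale(logits, T):
    """Divide logits by a scalar temperature T before the softmax."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)       # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick T on a held-out split by minimizing negative log-likelihood (grid search)."""
    nll = lambda p: -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=lambda T: nll(temperature_scale(logits, T)))
```

Because dividing logits by a positive scalar leaves the argmax unchanged, temperature scaling softens (T > 1) or sharpens (T < 1) the predicted probabilities without affecting accuracy, which is why it is a popular baseline for the calibration methods surveyed here.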
Papers
Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity
Dennis Ulmer, Jes Frellsen, Christian Hardmeier
Autoencoded sparse Bayesian in-IRT factorization, calibration, and amortized inference for the Work Disability Functional Assessment Battery
Joshua C. Chang, Carson C. Chow, Julia Porcino
Can Calibration Improve Sample Prioritization?
Ganesh Tata, Gautham Krishna Gudur, Gopinath Chennupati, Mohammad Emtiyaz Khan
Calibration and Uncertainty Characterization for Ultra-Wideband Two-Way-Ranging Measurements
Mohammed Ayman Shalaby, Charles Champagne Cossette, James Richard Forbes, Jerome Le Ny