Score Calibration

Score calibration aims to improve the reliability and interpretability of prediction scores from machine learning models, ensuring that predicted probabilities accurately reflect the true likelihood of an event. Current research focuses on developing and applying calibration techniques, often based on logistic regression, neural networks (e.g., magnitude estimation networks), or Wasserstein barycenters, across diverse applications such as speaker verification, entity matching, and relation extraction. Effective score calibration is crucial for the fairness, accuracy, and trustworthiness of machine learning systems in high-stakes decision-making, with impact on fields ranging from biometrics to data cleaning.
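
As a concrete illustration of the logistic-regression approach mentioned above, the minimal sketch below applies Platt-style calibration to synthetic verification scores using scikit-learn; the score distributions, data sizes, and library choice are illustrative assumptions and not drawn from any specific paper listed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

# Hypothetical held-out calibration set: raw, uncalibrated scores from some
# verification model, together with binary ground-truth labels.
rng = np.random.default_rng(0)
raw_scores = np.concatenate([rng.normal(2.0, 1.0, 500),    # target trials
                             rng.normal(-1.0, 1.0, 500)])  # non-target trials
labels = np.concatenate([np.ones(500), np.zeros(500)]).astype(int)

# Platt-style calibration: fit a logistic regression that maps the 1-D raw
# score to a probability, so the output can be read as P(event | score).
calibrator = LogisticRegression()
calibrator.fit(raw_scores.reshape(-1, 1), labels)
calibrated_probs = calibrator.predict_proba(raw_scores.reshape(-1, 1))[:, 1]

# Naive baseline: squash the raw score through a fixed sigmoid, no fitting.
naive_probs = 1.0 / (1.0 + np.exp(-raw_scores))

# Lower Brier score means the probabilities better match observed outcomes.
print("Brier score (fixed sigmoid):     ", brier_score_loss(labels, naive_probs))
print("Brier score (Platt calibration): ", brier_score_loss(labels, calibrated_probs))
```

Because the fitted mapping is monotone in the raw score, this kind of calibration leaves the ranking of candidates unchanged and only adjusts how scores are interpreted as probabilities, which is why it is a common post-processing step in verification and matching pipelines.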

Papers