Global Calibration
Global calibration in machine learning aims to ensure that a model's predicted probabilities match observed outcome frequencies: a prediction made with 90% confidence should be correct 90% of the time. Current research focuses on developing novel calibration metrics and algorithms, particularly for large language models and federated learning settings, often employing techniques such as parameterized scalers (e.g., temperature scaling) and contrastive loss functions to improve calibration without sacrificing accuracy. Well-calibrated models are crucial for building trustworthy AI systems across diverse applications, from medical diagnosis to autonomous vehicles, because reliable uncertainty estimates directly support better decision-making.
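The "90% confidence should be correct 90% of the time" criterion is typically quantified with the expected calibration error (ECE), which bins predictions by confidence and averages the gap between mean confidence and empirical accuracy in each bin. The sketch below is a minimal illustrative implementation; the function name and binning scheme are choices made here, not drawn from any specific paper above.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: the bin-weighted average gap between
    mean predicted confidence and empirical accuracy.

    confidences: predicted probability of the predicted class, in [0, 1]
    correct:     1 if the prediction was right, 0 otherwise
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; predictions with confidence exactly 0
        # fall outside, which is harmless for this sketch.
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in the bin
    return ece

# Perfectly calibrated toy case: ten 90%-confidence predictions, nine correct.
conf = np.full(10, 0.9)
hits = np.array([1] * 9 + [0])
print(expected_calibration_error(conf, hits))  # 0.0
```

A lower ECE means predicted probabilities track accuracy more closely; post-hoc methods such as temperature scaling fit a single scalar on held-out data to reduce exactly this gap.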