Model Calibration
Model calibration is the task of aligning a machine learning model's predicted probabilities with the empirical frequency at which those predictions are correct. Current research emphasizes improving calibration across diverse settings, including federated learning, continual learning, and applications with imbalanced or out-of-distribution data, often using techniques such as temperature scaling, focal-loss modifications, and ensembling. Well-calibrated models are crucial for building trustworthy AI systems, particularly in high-stakes domains such as medical diagnosis and autonomous driving, where reliable uncertainty quantification underpins safe and effective decision-making; a minimal temperature-scaling sketch is shown below.
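Temperature scaling, the simplest of the techniques mentioned above, divides a model's logits by a single scalar learned on a held-out set, rescaling confidence without changing the predicted class. The sketch below is illustrative only: it assumes NumPy/SciPy and hypothetical val_logits, val_labels, and test_logits arrays, and does not reproduce any specific paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels):
    """Find the temperature T > 0 that minimizes negative log-likelihood
    on a held-out validation set (val_logits: [N, C], val_labels: [N])."""
    def nll(t):
        probs = softmax(val_logits / t)
        return -np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
    result = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return result.x

# Usage sketch (hypothetical arrays):
# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = softmax(test_logits / T)
```

Because a single scalar is fitted, temperature scaling leaves accuracy unchanged and only adjusts confidence; its effect is typically assessed with metrics such as expected calibration error on a held-out set.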