Model Calibration
Model calibration focuses on aligning a machine learning model's predicted probabilities with the actual likelihood of those predictions being correct. Current research emphasizes improving calibration across diverse settings, including federated learning, continual learning, and applications with imbalanced or out-of-distribution data, often employing techniques like temperature scaling, focal loss modifications, and ensemble methods. Achieving well-calibrated models is crucial for building trustworthy AI systems, particularly in high-stakes domains like medical diagnosis and autonomous driving, where reliable uncertainty quantification is paramount for safe and effective decision-making.
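Of the techniques mentioned above, temperature scaling is the simplest to illustrate: a single scalar T > 0 divides a trained model's logits, and T is chosen to minimize the negative log-likelihood on a held-out validation set. The sketch below is a minimal NumPy/SciPy illustration, not any particular paper's implementation; the toy logits and the `fit_temperature` helper are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(temperature, logits, labels):
    """Negative log-likelihood of softmax(logits / temperature)."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Fit the scalar temperature by minimizing validation NLL."""
    result = minimize_scalar(
        nll, bounds=(0.05, 10.0), method="bounded", args=(logits, labels)
    )
    return result.x

# Toy validation set with deliberately sharp (overconfident) logits.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3)) * 5.0   # noisy and overly sharp
logits[np.arange(500), labels] += 2.0      # weakly informative signal
T = fit_temperature(logits, labels)
```

Because the temperature only rescales logits, the model's argmax predictions (and hence accuracy) are unchanged; only the confidence of the predicted probabilities is adjusted.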