Model Calibration
Model calibration focuses on aligning a machine learning model's predicted probabilities with the empirical frequency at which its predictions are correct. Current research emphasizes improving calibration across diverse settings, including federated learning, continual learning, and applications with imbalanced or out-of-distribution data, often employing techniques such as temperature scaling, modified focal losses, and ensemble methods. Achieving well-calibrated models is crucial for building trustworthy AI systems, particularly in high-stakes domains like medical diagnosis and autonomous driving, where reliable uncertainty quantification is paramount for safe and effective decision-making.
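Temperature scaling, one of the techniques mentioned above, is among the simplest post-hoc calibration methods: a single scalar temperature T is fit on held-out validation logits to minimize negative log-likelihood, and the same T is then used to rescale logits at inference time. The following is a minimal illustrative sketch in NumPy/SciPy; the function names and synthetic data are placeholders of our own, not drawn from any specific paper or library API.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # Negative log-likelihood of the true labels under temperature-scaled softmax.
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels):
    # Find the single scalar T > 0 that minimizes NLL on a held-out validation set.
    result = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded",
                             args=(val_logits, val_labels))
    return result.x

# Synthetic, deliberately over-confident logits for demonstration only.
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=1000)
logits = rng.normal(size=(1000, 5))
logits[np.arange(1000), labels] += 3.0  # push confidence toward the true class
T = fit_temperature(logits, labels)
calibrated_probs = softmax(logits / T)
print(f"fitted temperature T = {T:.2f}")
```

Because T only rescales the logits, temperature scaling never changes the model's predicted class or accuracy; it only adjusts how confident the probabilities are, which is why it is a popular baseline when calibration must be improved without retraining.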