Model Calibration
Model calibration focuses on aligning a machine learning model's predicted probabilities with the empirical frequency of those predictions being correct: among predictions made with 80% confidence, roughly 80% should actually be right. Current research emphasizes improving calibration across diverse settings, including federated learning, continual learning, and applications with imbalanced or out-of-distribution data, often employing techniques such as temperature scaling, focal-loss modifications, and ensemble methods. Well-calibrated models are crucial for building trustworthy AI systems, particularly in high-stakes domains like medical diagnosis and autonomous driving, where reliable uncertainty quantification is paramount for safe and effective decision-making.
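Since temperature scaling is the workhorse among the techniques named above, the sketch below shows how it is typically applied post hoc: a single scalar T is fit on held-out validation logits to minimize negative log-likelihood, and expected calibration error (ECE) is measured before and after. This is a minimal illustration, not code from any of the surveyed papers; the function names, the choice of Adam rather than the more traditional L-BFGS optimizer, and the synthetic logits are all assumptions made for the example.

```python
# Minimal sketch of post-hoc temperature scaling and ECE measurement.
# All names (fit_temperature, expected_calibration_error) and the synthetic
# data are illustrative assumptions, not drawn from a specific paper.
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    lr: float = 0.01, steps: int = 200) -> float:
    """Learn a scalar T > 0 minimizing NLL on held-out validation logits."""
    log_t = torch.zeros(1, requires_grad=True)   # optimize log(T) so T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor,
                               n_bins: int = 10) -> float:
    """Standard ECE: bin by confidence, average |accuracy - confidence| per bin."""
    conf, preds = probs.max(dim=1)
    correct = preds.eq(labels).float()
    ece = 0.0
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean().item() - conf[mask].mean().item())
            ece += mask.float().mean().item() * gap
    return ece

# Synthetic, deliberately overconfident logits to demonstrate the effect.
torch.manual_seed(0)
labels = torch.randint(0, 5, (2000,))
logits = 4.0 * torch.randn(2000, 5)          # large-magnitude noise -> overconfidence
logits[torch.arange(2000), labels] += 2.0    # some signal toward the true class

T = fit_temperature(logits, labels)
print(f"fitted T = {T:.2f}")
print("ECE before:", expected_calibration_error(F.softmax(logits, dim=1), labels))
print("ECE after: ", expected_calibration_error(F.softmax(logits / T, dim=1), labels))
```

Because dividing by T only rescales the logits, the argmax prediction (and hence accuracy) is unchanged; only the confidence of the softmax outputs moves, which is exactly why temperature scaling is such a popular post-hoc fix.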