Distribution Calibration
Distribution calibration in machine learning aims to improve the reliability of model predictions by aligning predicted probabilities with the frequencies at which the predicted outcomes actually occur. Current research addresses calibration challenges in settings such as active learning, noisy data, few-shot learning, and out-of-distribution generalization, often using ensemble methods, Bayesian approaches, and optimal transport. These advances are crucial for the trustworthiness and robustness of machine learning models across diverse applications, particularly in safety-critical domains where reliable uncertainty quantification is paramount.
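To make the idea of a probability-outcome mismatch concrete, the sketch below computes the Expected Calibration Error (ECE), a standard way of quantifying miscalibration: predictions are binned by confidence and the average confidence in each bin is compared to the empirical accuracy. This is a minimal illustration for the binary case, not an implementation from any of the listed papers; the function name and the synthetic data are assumptions for the example.

```python
# Minimal sketch: Expected Calibration Error (ECE) for binary predictions.
# A well-calibrated model has predicted probabilities that match observed
# outcome frequencies, so its ECE is close to zero.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence and compare mean confidence to empirical accuracy."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    confidences = np.maximum(probs, 1.0 - probs)        # confidence of the predicted class
    predictions = (probs >= 0.5).astype(int)             # hard predictions
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.5, 1.0, n_bins + 1)        # binary confidences lie in [0.5, 1.0]
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap                    # weight gap by bin occupancy
    return ece

# Example: labels drawn from Bernoulli(p) with p equal to the predicted probability
# are calibrated by construction, so the ECE should be small.
rng = np.random.default_rng(0)
probs = rng.uniform(0.0, 1.0, size=5000)
labels = rng.binomial(1, probs)
print(round(expected_calibration_error(probs, labels), 3))
```

Calibration methods such as temperature scaling, ensembling, or Bayesian posteriors aim to reduce exactly this kind of gap between confidence and observed accuracy.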