Calibrated Learning
Calibrated learning aims to improve the reliability and trustworthiness of machine learning models by ensuring that their predicted probabilities accurately reflect the true likelihood of outcomes. Current research focuses on achieving better calibration in challenging scenarios such as federated learning, long-tailed data distributions, and multi-task or multi-modal settings, often employing Bayesian methods, ensemble learning, and novel loss functions. These advances are crucial for deploying AI systems in high-stakes applications where accurate uncertainty quantification is paramount, such as autonomous driving, healthcare, and industrial automation.
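To make "predicted probabilities reflect the true likelihood" concrete, a standard diagnostic is the expected calibration error (ECE): predictions are bucketed by confidence, and the average gap between confidence and empirical accuracy is measured per bucket. The sketch below is a minimal illustration of this idea; the function name, the 10 equal-width bins, and the simulated data are illustrative choices, not drawn from any specific paper above.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Estimate ECE with equal-width confidence bins.

    confidences: (N,) predicted confidence of the predicted class, in (0, 1].
    correct:     (N,) 1.0 if the prediction was right, else 0.0.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            avg_conf = confidences[in_bin].mean()  # mean stated confidence in bin
            avg_acc = correct[in_bin].mean()       # empirical accuracy in bin
            ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Hypothetical example: an overconfident model claims 90% confidence
# but is correct only ~70% of the time, giving an ECE near 0.2.
rng = np.random.default_rng(0)
confidences = np.full(1000, 0.9)
correct = (rng.random(1000) < 0.7).astype(float)
print(expected_calibration_error(confidences, correct))  # ~0.2
```

A perfectly calibrated model drives this quantity to zero; post-hoc methods such as temperature scaling, or the Bayesian and ensemble approaches mentioned above, aim to shrink exactly this confidence-accuracy gap.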