Classification Uncertainty
Classification uncertainty, the quantification of confidence in a model's predictions, is a crucial area of machine learning research that aims to improve the reliability and trustworthiness of AI systems. Current research focuses on disentangling different sources of uncertainty (e.g., aleatoric uncertainty, which stems from inherent noise in the data, and epistemic uncertainty, which stems from the model's limited knowledge), on developing novel architectures such as Bayesian neural networks and evidential deep learning models, and on employing techniques such as variance-based methods and model souping to improve uncertainty estimates. These advances are vital for decision-making in high-stakes applications such as medical diagnosis and autonomous systems, where understanding and managing uncertainty is essential for safe and reliable operation.
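The aleatoric/epistemic split mentioned above can be illustrated with a common entropy-based decomposition: given class-probability samples from an ensemble or MC-dropout passes, the entropy of the mean prediction (total uncertainty) separates into the mean of the member entropies (aleatoric) plus the members' mutual information with the prediction (epistemic). The sketch below is a minimal, hypothetical implementation of that decomposition, not code from any of the papers referenced here:

```python
import numpy as np

def decompose_uncertainty(member_probs):
    """Split predictive uncertainty for one input into aleatoric and epistemic parts.

    member_probs: array of shape (n_members, n_classes) holding class
    probabilities from ensemble members (or MC-dropout forward passes).
    Returns (total, aleatoric, epistemic), all in nats.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    eps = 1e-12  # guard against log(0)

    # Total uncertainty: entropy of the averaged predictive distribution.
    mean_p = member_probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))

    # Aleatoric part: average entropy of the individual members
    # (uncertainty that remains even when members agree).
    aleatoric = -np.mean(
        np.sum(member_probs * np.log(member_probs + eps), axis=1)
    )

    # Epistemic part: the gap, i.e. the mutual information between the
    # model parameters and the prediction (high when members disagree).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

For instance, two members that both predict [0.9, 0.1] yield near-zero epistemic uncertainty, while confident but contradictory members ([0.99, 0.01] vs. [0.01, 0.99]) yield an epistemic term that dominates the aleatoric one, which is the behaviour that makes the decomposition useful for flagging out-of-distribution inputs.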