Epistemic Uncertainty Quantification

Epistemic uncertainty quantification (EUQ) aims to measure the uncertainty that stems from a model's limited knowledge (and is therefore reducible with more data), distinguishing it from aleatoric uncertainty, the irreducible randomness inherent in the data itself. Current research focuses on improving EUQ methods across model architectures, including Bayesian neural networks and pre-trained models, often employing gradient-based approaches or techniques such as evidential deep learning to estimate uncertainty. Accurate EUQ is crucial for building trustworthy AI systems, particularly in high-stakes applications like healthcare and autonomous driving, because it enables better decision-making and improved out-of-distribution detection.
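One common way to separate the two kinds of uncertainty, not tied to any specific paper above, is to average predictions over an ensemble (or MC-dropout samples) and decompose the predictive entropy: the expected entropy of individual members captures aleatoric uncertainty, and the remainder (the mutual information between prediction and model) captures epistemic uncertainty. A minimal NumPy sketch of this decomposition, with hypothetical inputs:

```python
import numpy as np

def uncertainty_decomposition(probs):
    """Split predictive uncertainty into epistemic and aleatoric parts.

    probs: array of shape (n_members, n_classes), softmax outputs from
    an ensemble (or MC-dropout samples) for a single input.
    """
    eps = 1e-12  # guard against log(0)
    mean_p = probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric part: average entropy of each member's own prediction.
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Epistemic part: the gap, i.e. mutual information between
    # the prediction and the model (the BALD score).
    epistemic = total - aleatoric
    return epistemic, aleatoric

# Members agree -> low epistemic uncertainty.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
# Members confident but contradictory -> high epistemic uncertainty.
disagree = np.array([[0.95, 0.05], [0.05, 0.95], [0.90, 0.10]])

e_agree, _ = uncertainty_decomposition(agree)
e_disagree, _ = uncertainty_decomposition(disagree)
assert e_agree < e_disagree
```

The disagreement case illustrates why the decomposition matters for out-of-distribution detection: each member is individually confident (low aleatoric entropy), yet the ensemble as a whole is uncertain, and only the epistemic term reveals this.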

Papers