Predictive Uncertainty Quantification

Predictive uncertainty quantification (PUQ) aims to estimate how reliable a machine learning model's predictions are, which is crucial for trustworthy decision-making in many applications. Current research focuses on improving PUQ for deep learning models, particularly through Bayesian neural networks, evidential deep learning, and adversarial approaches that better capture both aleatoric uncertainty (noise inherent in the data) and epistemic uncertainty (the model's own lack of knowledge, which shrinks with more data). These advances are vital for making AI systems more robust and explainable in domains such as autonomous driving and natural language processing, where reliable uncertainty estimates are essential for safe and effective deployment.
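To make the aleatoric/epistemic split concrete, the sketch below uses a bootstrap ensemble of simple polynomial regressors as a lightweight stand-in for the deep ensembles and Bayesian networks mentioned above: disagreement between ensemble members approximates epistemic uncertainty, while each member's residual variance approximates aleatoric noise. All function names and the toy data are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression with input-dependent noise (aleatoric uncertainty).
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + rng.normal(0, 0.1 + 0.1 * np.abs(x), size=200)

def fit_member(x, y):
    """Fit one ensemble member on a bootstrap resample of the data."""
    idx = rng.integers(0, len(x), size=len(x))
    coeffs = np.polyfit(x[idx], y[idx], deg=5)
    # Residual variance on the resample: a crude aleatoric estimate.
    resid_var = np.var(y[idx] - np.polyval(coeffs, x[idx]))
    return coeffs, resid_var

members = [fit_member(x, y) for _ in range(20)]

x_test = np.linspace(-3, 3, 50)
preds = np.stack([np.polyval(c, x_test) for c, _ in members])

mean = preds.mean(axis=0)
epistemic = preds.var(axis=0)                 # spread across members
aleatoric = np.mean([v for _, v in members])  # average noise estimate
total = epistemic + aleatoric                 # total predictive variance
```

The same mean/variance decomposition underlies Monte Carlo dropout and deep ensembles; only the source of the model samples changes.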

Papers