Reliable Uncertainty

Reliable uncertainty quantification in machine learning aims to provide trustworthy estimates of prediction confidence, which is crucial for deploying models in high-stakes applications such as autonomous driving and medical diagnosis. Current research focuses on methods that accurately capture both aleatoric (data-inherent) and epistemic (model-related) uncertainty, using techniques such as Bayesian neural networks, deep ensembles, evidential deep learning, and conformal prediction, often within specific model architectures such as DeepONets. These advances are vital for improving the safety and reliability of AI systems across diverse scientific fields and practical applications, enabling better-informed decision-making in situations where prediction uncertainty is critical.
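As one concrete illustration of the techniques listed above, the following is a minimal sketch of split conformal prediction for regression, using only NumPy. The calibration data, the identity "model", and the miscoverage level alpha = 0.1 are hypothetical stand-ins for a real fitted predictor and held-out calibration set, not part of any specific paper's method.

```python
# Minimal split conformal prediction sketch (hypothetical setup).
# Requires NumPy >= 1.22 for the `method` argument of np.quantile.
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Return prediction intervals with roughly (1 - alpha) marginal coverage.

    residuals_cal: |y - y_hat| on a held-out calibration set
    y_pred_test:   point predictions on new inputs
    """
    n = len(residuals_cal)
    # Conformal quantile level: ceil((n + 1)(1 - alpha)) / n
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(residuals_cal, q_level, method="higher")
    return y_pred_test - q_hat, y_pred_test + q_hat

# Toy example: synthetic data and a trivial "model" (the identity map).
rng = np.random.default_rng(0)
x_cal = rng.uniform(0.0, 1.0, 500)
y_cal = x_cal + rng.normal(0.0, 0.1, 500)     # aleatoric noise in the labels
residuals = np.abs(y_cal - x_cal)             # calibration residuals |y - model(x)|
lower, upper = split_conformal_interval(residuals, y_pred_test=np.array([0.3, 0.7]))
print(lower, upper)                           # intervals with ~90% marginal coverage
```

The key design point is that the quantile is computed on a calibration set disjoint from training, which is what gives the distribution-free coverage guarantee; the other methods mentioned (deep ensembles, Bayesian neural networks, evidential deep learning) instead estimate uncertainty from the model itself.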

Papers