Quantified Uncertainty

Quantified uncertainty focuses on developing methods to estimate and represent the reliability of machine learning model predictions, moving beyond simple point estimates to predictions accompanied by explicit uncertainty measures. Current research emphasizes disentangling different types of uncertainty (e.g., aleatoric uncertainty, which stems from inherent noise in the data, and epistemic uncertainty, which stems from limited knowledge of the model) and tailoring uncertainty estimation techniques to specific tasks, often employing Bayesian methods, neural networks (including iterative architectures and ensembles), and probabilistic programming. This work is crucial for building trustworthy AI systems, improving the reliability of scientific discoveries based on data-driven models, and enabling safe deployment of AI in high-stakes applications such as robotics and autonomous systems.
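As a concrete illustration of disentangling the two uncertainty types mentioned above, the sketch below applies the standard mutual-information decomposition to an ensemble's class-probability outputs: total predictive entropy splits into an aleatoric part (average entropy of each member's prediction) and an epistemic part (disagreement between members). This is a minimal sketch assuming NumPy and a classification setting; the function name `decompose_uncertainty` and the toy ensemble outputs are illustrative, not from any specific paper.

```python
import numpy as np

def decompose_uncertainty(member_probs):
    """Split an ensemble's predictive uncertainty into aleatoric and
    epistemic parts via the mutual-information decomposition.

    member_probs: array of shape (n_members, n_classes); each row is
    one ensemble member's predicted class distribution.
    """
    eps = 1e-12  # guard against log(0)
    mean_probs = member_probs.mean(axis=0)
    # Total uncertainty: entropy of the ensemble-averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    # Aleatoric: expected entropy of individual member predictions
    # (irreducible data noise each model already reports).
    aleatoric = -np.mean(
        np.sum(member_probs * np.log(member_probs + eps), axis=1)
    )
    # Epistemic: the remainder, i.e. mutual information between the
    # prediction and the model; large when members disagree.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Toy example: agreeing members yield low epistemic uncertainty,
# disagreeing members yield high epistemic uncertainty.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
disagree = np.array([[0.95, 0.05], [0.50, 0.50], [0.05, 0.95]])
_, _, ep_agree = decompose_uncertainty(agree)
_, _, ep_disagree = decompose_uncertainty(disagree)
```

The same decomposition applies to Monte Carlo dropout or any other source of multiple predictive samples, with ensemble members replaced by stochastic forward passes.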

Papers