Epistemic Uncertainty
Epistemic uncertainty, which reflects a model's lack of knowledge rather than inherent data noise (aleatoric uncertainty), is a crucial area of machine learning research aimed at improving the reliability and trustworthiness of AI systems. Current efforts focus on quantifying and calibrating epistemic uncertainty across model architectures such as Bayesian neural networks, deep ensembles, and energy-based models, often for tasks like out-of-distribution detection and safe reinforcement learning. Accurate epistemic uncertainty quantification is vital in applications ranging from medical diagnosis to autonomous driving, because it lets models recognize the limits of their knowledge and avoid overconfident predictions in unfamiliar situations. The field is also actively addressing challenges such as disentangling aleatoric from epistemic uncertainty and mitigating the "epistemic uncertainty collapse" observed in large models.
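As a concrete illustration of one common approach mentioned above, the sketch below shows the disagreement-based decomposition often used with deep ensembles: the entropy of the averaged prediction (total uncertainty) is split into the mean per-member entropy (aleatoric part) and the remaining mutual-information term (epistemic part). This is a minimal sketch, not any specific paper's method; the function name, array shapes, and example numbers are illustrative assumptions.

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Decompose predictive uncertainty for one input under a deep ensemble.

    member_probs: array of shape (n_members, n_classes), each row a member's
    softmax output (hypothetical shape). Returns (total, aleatoric, epistemic)
    in nats, where epistemic = total - aleatoric is the mutual information
    between the prediction and the ensemble member (model disagreement).
    """
    eps = 1e-12
    mean_probs = member_probs.mean(axis=0)
    # Entropy of the averaged predictive distribution (total uncertainty).
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    # Average entropy of each member's prediction (aleatoric estimate).
    aleatoric = np.mean(
        [-np.sum(p * np.log(p + eps)) for p in member_probs]
    )
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members that agree -> low epistemic uncertainty.
agree = np.array([[0.90, 0.10], [0.85, 0.15], [0.88, 0.12]])
# Members that disagree -> high epistemic uncertainty (e.g. out-of-distribution input).
disagree = np.array([[0.90, 0.10], [0.10, 0.90], [0.50, 0.50]])

print(ensemble_uncertainty(agree))
print(ensemble_uncertainty(disagree))
```

In this decomposition, high aleatoric but low epistemic uncertainty means the members agree the input is intrinsically ambiguous, while a large epistemic term signals the models disagree, which is the behavior exploited for out-of-distribution detection.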
Papers
Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions
Helena Löfström, Tuwe Löfström, Johan Hallberg Szabadvary
PH-Dropout: Practical Epistemic Uncertainty Quantification for View Synthesis
Chuanhao Sun, Thanos Triantafyllou, Anthos Makris, Maja Drmač, Kai Xu, Luo Mai, Mahesh K. Marina