Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify the reliability of model predictions, addressing the critical need for trustworthy AI systems. Current research focuses on improving uncertainty quantification across a range of approaches, including Bayesian neural networks, deep ensembles, evidential deep learning, and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimation is crucial for responsible AI deployment: it enables better decision-making in high-stakes applications, helps identify unreliable predictions, improves model calibration, and mitigates issues such as hallucinations in large language models.
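To make one of the named techniques concrete, below is a minimal sketch of split conformal prediction for regression, which turns any point predictor into prediction intervals with a target coverage level. The toy data, the least-squares predictor, and the miscoverage level alpha = 0.1 are illustrative assumptions, not drawn from any of the listed papers.

```python
import numpy as np

# Illustrative sketch of split conformal prediction for regression.
# Data, model, and alpha below are hypothetical choices for demonstration.

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise.
x = rng.uniform(-3, 3, size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)

# Split into a proper training set and a calibration set.
x_train, y_train = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# Fit any point predictor on the training split (here: least squares).
slope, intercept = np.polyfit(x_train, y_train, deg=1)
predict = lambda inputs: slope * inputs + intercept

# Nonconformity scores on the calibration split: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile for miscoverage level alpha (90% target coverage).
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction interval for new inputs: point prediction +/- q_hat.
x_new = np.array([0.0, 1.5, -2.0])
lower = predict(x_new) - q_hat
upper = predict(x_new) + q_hat
print(list(zip(lower.round(2), upper.round(2))))
```

Under exchangeability of calibration and test points, intervals built this way cover the true label with probability at least 1 - alpha, regardless of how well the underlying point predictor is specified.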
Papers
Tiny Deep Ensemble: Uncertainty Estimation in Edge AI Accelerators via Ensembling Normalization Layers with Shared Weights
Soyed Tuhin Ahmed, Michael Hefenbrock, Mehdi B. Tahoori
Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation
Pei Liu, Luping Ji
Intelligent Cardiac Auscultation for Murmur Detection via Parallel-Attentive Models with Uncertainty Estimation
Zixing Zhang, Tao Pang, Jing Han, Björn W. Schuller
Enabling Uncertainty Estimation in Iterative Neural Networks
Nikita Durasov, Doruk Oner, Jonathan Donier, Hieu Le, Pascal Fua
EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation
Kudaibergen Abutalip, Numan Saeed, Ikboljon Sobirov, Vincent Andrearczyk, Adrien Depeursinge, Mohammad Yaqub