Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify the reliability of model predictions, addressing the critical need for trustworthy AI systems. Current research focuses on improving uncertainty quantification across diverse model architectures, including Bayesian neural networks, ensembles, and newer methods such as evidential deep learning and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimation is crucial for responsible AI deployment, enabling better decision-making in high-stakes applications and fostering greater trust in AI-driven outcomes across scientific and practical fields. Key benefits include identifying unreliable predictions, improving model calibration, and mitigating issues such as hallucinations in large language models.
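The methods named above differ in how they turn raw model outputs into calibrated uncertainty. As one concrete illustration, the sketch below shows split conformal prediction for regression: calibration residuals yield a quantile that widens point predictions into intervals with approximate coverage guarantees. This is a minimal sketch, not drawn from any of the listed papers; the regressor `model` (assumed to expose a `.predict` method), the calibration split, and the coverage level `alpha` are all illustrative assumptions.

```python
import numpy as np

def conformal_intervals(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals with ~(1 - alpha) coverage.

    model  : any fitted regressor with a .predict(X) method (assumption)
    X_cal, y_cal : held-out calibration data not used for fitting
    X_test : inputs for which intervals are requested
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)

    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, q_level, method="higher")

    # Symmetric intervals around the point predictions.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

In practice the same recipe applies to any black-box regressor; only the choice of nonconformity score (here, absolute residuals) changes across variants of the method.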
Papers
Improving Out-of-Distribution Detection via Epistemic Uncertainty Adversarial Training
Derek Everett, Andre T. Nguyen, Luke E. Richards, Edward Raff
A Robust Learning Methodology for Uncertainty-aware Scientific Machine Learning models
Erbet Costa Almeida, Carine de Menezes Rebello, Marcio Fontana, Leizer Schnitman, Idelfonso Bessa dos Reis Nogueira
Single Model Uncertainty Estimation via Stochastic Data Centering
Jayaraman J. Thiagarajan, Rushil Anirudh, Vivek Narayanaswamy, Peer-Timo Bremer
BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks
Uddeshya Upadhyay, Shyamgopal Karthik, Yanbei Chen, Massimiliano Mancini, Zeynep Akata