Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify how reliable a model's predictions are, a prerequisite for trustworthy AI systems. Current research focuses on improving uncertainty quantification across diverse model families, including Bayesian neural networks, deep ensembles, and newer approaches such as evidential deep learning and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimates support responsible AI deployment: they enable better decision-making in high-stakes applications, help identify unreliable predictions, improve model calibration, and mitigate issues such as hallucinations in large language models.
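As one concrete illustration of the methods named above, the following is a minimal sketch of split conformal prediction for regression. It is not drawn from any of the listed papers; the synthetic data and the use of scikit-learn's RandomForestRegressor as the base model are assumptions made purely for the example, and any point predictor could be substituted.

```python
# Minimal sketch of split conformal prediction for regression.
# Assumption: synthetic 1-D data and a RandomForestRegressor base model,
# chosen only to make the example self-contained.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: y = sin(x) + noise
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(2000)

# Split into a proper training set and a held-out calibration set.
X_train, y_train = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# The empirical quantile of the scores gives the interval half-width.
alpha = 0.1  # target 90% coverage
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction intervals for new points: [f(x) - q_hat, f(x) + q_hat].
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = model.predict(X_new)
for x, lo, hi in zip(X_new[:, 0], preds - q_hat, preds + q_hat):
    print(f"x = {x:+.2f}: 90% interval [{lo:+.2f}, {hi:+.2f}]")
```

Under the usual exchangeability assumption on the calibration and test data, intervals built this way cover the true label with probability at least 1 - alpha, regardless of how well the underlying model fits.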
Papers
Quantification of Predictive Uncertainty via Inference-Time Sampling
Katarína Tóthová, Ľubor Ladický, Daniel Thul, Marc Pollefeys, Ender Konukoglu
Joint Out-of-Distribution Detection and Uncertainty Estimation for Trajectory Prediction
Julian Wiederer, Julian Schmidt, Ulrich Kressel, Klaus Dietmayer, Vasileios Belagiannis
Uncertainty analysis for accurate ground truth trajectories with robotic total stations
Maxime Vaidis, William Dubois, Effie Daum, Damien LaRocque, François Pomerleau
Uncertainty Estimation for Molecules: Desiderata and Methods
Tom Wollschläger, Nicholas Gao, Bertrand Charpentier, Mohamed Amine Ketata, Stephan Günnemann
Unfolding Framework with Prior of Convolution-Transformer Mixture and Uncertainty Estimation for Video Snapshot Compressive Imaging
Siming Zheng, Xin Yuan
Estimating Uncertainty in PET Image Reconstruction via Deep Posterior Sampling
Tin Vlašić, Tomislav Matulić, Damir Seršić
U-PASS: an Uncertainty-guided deep learning Pipeline for Automated Sleep Staging
Elisabeth R. M. Heremans, Nabeel Seedat, Bertien Buyse, Dries Testelmans, Mihaela van der Schaar, Maarten De Vos