Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence in predictions made by machine learning models, a capability that is crucial for high-stakes applications. Current research focuses on developing robust UQ methods, particularly on correcting biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often using techniques such as conformal prediction, Bayesian methods, and ensemble learning. Reliable uncertainty estimates enhance the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
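As a concrete illustration of one of these techniques, the sketch below implements split conformal prediction for regression: fit a model on one half of the data, compute residual-based nonconformity scores on a held-out calibration half, and use their empirical quantile to form prediction intervals. This is a minimal example assuming synthetic data, an arbitrary scikit-learn model, and a 90% target coverage level; none of it is drawn from the papers listed below.

```python
# Minimal sketch of split conformal prediction for regression.
# Data, model, and alpha are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + 0.5 * np.sin(3 * X[:, 1]) + rng.normal(scale=0.3, size=2000)

# Split the data: fit the model on one half, calibrate on the other.
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_fit, y_fit)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile with finite-sample correction for alpha = 0.1 (90% coverage).
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Symmetric prediction interval around the point prediction for a new input.
x_new = rng.normal(size=(1, 5))
y_hat = model.predict(x_new)[0]
print(f"90% prediction interval: [{y_hat - q:.2f}, {y_hat + q:.2f}]")
```

The resulting guarantee is distribution-free: as long as the calibration and test points are exchangeable, the interval covers the true label with probability at least 1 - alpha, regardless of the underlying model.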
Papers
Learning Cellular Network Connection Quality with Conformal Prediction
Hanyang Jiang, Elizabeth Belding, Ellen Zegura, Yao Xie
To Believe or Not to Believe Your LLM
Yasin Abbasi Yadkori, Ilja Kuzborskij, András György, Csaba Szepesvári
Label-wise Aleatoric and Epistemic Uncertainty Quantification
Yusuf Sale, Paul Hofman, Timo Löhr, Lisa Wimmer, Thomas Nagler, Eyke Hüllermeier
The Deep Latent Space Particle Filter for Real-Time Data Assimilation with Uncertainty Quantification
Nikolaj T. Mücke, Sander M. Bohté, Cornelis W. Oosterlee
A Bayesian Approach to Online Planning
Nir Greshler, David Ben Eli, Carmel Rabinovitz, Gabi Guetta, Liran Gispan, Guy Zohar, Aviv Tamar
Streamflow Prediction with Uncertainty Quantification for Water Management: A Constrained Reasoning and Learning Approach
Mohammed Amine Gharsallaoui, Bhupinderjeet Singh, Supriya Savalkar, Aryan Deshwal, Yan Yan, Ananth Kalyanaraman, Kirti Rajagopalan, Janardhan Rao Doppa
Uncertainty Quantification for Bird's Eye View Semantic Segmentation: Methods and Benchmarks
Linlin Yu, Bowen Yang, Tianhao Wang, Kangshuo Li, Feng Chen
VENI, VINDy, VICI: a variational reduced-order modeling framework with uncertainty quantification
Paolo Conti, Jonas Kneifl, Andrea Manzoni, Attilio Frangi, Jörg Fehr, Steven L. Brunton, J. Nathan Kutz
Uncertainty Quantification for Deep Learning
Peter Jan van Leeuwen, J. Christine Chiu, C. Kevin Yang
Task-Driven Uncertainty Quantification in Inverse Problems via Conformal Prediction
Jeffrey Wen, Rizwan Ahmad, Philip Schniter
SEMF: Supervised Expectation-Maximization Framework for Predicting Intervals
Ilia Azizi, Marc-Olivier Boldi, Valérie Chavez-Demoulin
BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification
Yen-An Lu, Wei-Shou Hu, Joel A. Paulson, Qi Zhang
Enhancing Global Sensitivity and Uncertainty Quantification in Medical Image Reconstruction with Monte Carlo Arbitrary-Masked Mamba
Jiahao Huang, Liutao Yang, Fanwen Wang, Yang Nan, Weiwen Wu, Chengyan Wang, Kuangyu Shi, Angelica I. Aviles-Rivero, Carola-Bibiane Schönlieb, Daoqiang Zhang, Guang Yang
Evaluation of Multi-task Uncertainties in Joint Semantic Segmentation and Monocular Depth Estimation
Steven Landgraf, Markus Hillemann, Theodor Kapler, Markus Ulrich
Uncertainty Quantification for Neurosymbolic Programs via Compositional Conformal Prediction
Ramya Ramalingam, Sangdon Park, Osbert Bastani
Score-based generative models are provably robust: an uncertainty quantification perspective
Nikiforos Mimikos-Stamatopoulos, Benjamin J. Zhang, Markos A. Katsoulakis