Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent how confident a machine learning model is in its predictions, a capability that is crucial for high-stakes applications where reliability is paramount. Current research focuses on developing robust UQ methods, particularly on correcting biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often through techniques such as conformal prediction, Bayesian methods, and ensemble learning. Reliable uncertainty estimates enhance the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
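To make one of the named techniques concrete, below is a minimal sketch of split conformal prediction for regression, one common distribution-free UQ method. It uses only NumPy with synthetic data and a simple least-squares model as illustrative placeholders; none of it is drawn from the papers listed here. The idea: hold out a calibration set, compute absolute residuals as nonconformity scores, and use their finite-sample-corrected quantile as the half-width of a prediction interval.

```python
# A minimal sketch of split conformal prediction for regression.
# Data, model, and alpha are illustrative assumptions, not from any listed paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + Gaussian noise.
x = rng.uniform(-3, 3, size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)

# Split into a proper training set and a held-out calibration set.
x_train, y_train = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# Fit any point predictor on the training split (here: 1-D least squares).
slope, intercept = np.polyfit(x_train, y_train, deg=1)
predict = lambda x_new: slope * x_new + intercept

# Nonconformity scores: absolute residuals on the calibration split.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile with the finite-sample (n + 1) correction.
alpha = 0.1  # target 90% marginal coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point; coverage >= 1 - alpha holds
# under exchangeability, regardless of how good the model is.
x_new = 1.5
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% prediction interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

The appeal of this recipe, and a reason it recurs across the papers below, is that the coverage guarantee is model-agnostic: the underlying predictor could just as well be a deep network or an LLM-based scorer, with only the nonconformity score changing.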
Papers
Counterfactual Uncertainty Quantification of Factual Estimand of Efficacy from Before-and-After Treatment Repeated Measures Randomized Controlled Trials
Xingya Wang, Yang Han, Yushi Liu, Szu-Yu Tang, Jason C. Hsu
Inherently Interpretable and Uncertainty-Aware Models for Online Learning in Cyber-Security Problems
Benjamin Kolicic, Alberto Caron, Chris Hicks, Vasilios Mavroudis
Addressing Uncertainty in LLMs to Enhance Reliability in Generative AI
Ramneet Kaur, Colin Samplawski, Adam D. Cobb, Anirban Roy, Brian Matejek, Manoj Acharya, Daniel Elenius, Alexander M. Berenbeim, John A. Pavlik, Nathaniel D. Bastian, Susmit Jha
Targeted Learning for Variable Importance
Xiaohan Wang, Yunzhe Zhou, Giles Hooker
Multi-fidelity Machine Learning for Uncertainty Quantification and Optimization
Ruda Zhang, Negin Alemazkoor
Uncertainty quantification for fast reconstruction methods using augmented equivariant bootstrap: Application to radio interferometry
Mostafa Cherif, Tobías I. Liaudat, Jonathan Kern, Christophe Kervazo, Jérôme Bobin
Legitimate ground-truth-free metrics for deep uncertainty classification scoring
Arthur Pignet, Chiara Regniez, John Klein
Improving Uncertainty Quantification in Large Language Models via Semantic Embeddings
Yashvir S. Grewal, Edwin V. Bonilla, Thang D. Bui