Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence in predictions made by machine learning models, which is crucial for high-stakes applications where reliable predictions are paramount. Current research focuses on developing robust UQ methods, particularly on addressing biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often employing techniques such as conformal prediction, Bayesian methods, and ensemble learning (a brief sketch of one such technique follows). The ability to reliably quantify uncertainty enhances the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
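As a concrete illustration of one of the techniques named above, the following is a minimal sketch of split conformal prediction for regression. The dataset, the linear model, and the 90% coverage level are illustrative assumptions chosen for the example; they are not drawn from any of the papers listed below.

```python
# Minimal sketch of split conformal prediction for regression.
# Data, model, and coverage level are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + noise (hypothetical, for illustration only)
X = rng.uniform(-1, 1, size=(1000, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.3, size=1000)

# Split into a proper training set and a held-out calibration set
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LinearRegression().fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set
scores = np.abs(y_cal - model.predict(X_cal))

# Calibration quantile giving (1 - alpha) marginal coverage
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: [f(x) - q, f(x) + q]
x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```

The appeal of this recipe is that it is model-agnostic: any point predictor can be wrapped this way, and the resulting intervals carry a finite-sample marginal coverage guarantee under the assumption that calibration and test points are exchangeable.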
Papers
The Implicit Delta Method
Nathan Kallus, James McInerney
Disentangled Uncertainty and Out of Distribution Detection in Medical Generative Models
Kumud Lakara, Matias Valdenegro-Toro
Comparison of Uncertainty Quantification with Deep Learning in Time Series Regression
Levente Foldesi, Matias Valdenegro-Toro
Homodyned K-distribution: parameter estimation and uncertainty quantification using Bayesian neural networks
Ali K. Z. Tehrani, Ivan M. Rosado-Mendez, Hassan Rivaz
Evaluating Point-Prediction Uncertainties in Neural Networks for Drug Discovery
Ya Ju Fan, Jonathan E. Allen, Kevin S. McLoughlin, Da Shi, Brian J. Bennion, Xiaohua Zhang, Felice C. Lightstone