Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence of predictions made by machine learning models, which is crucial for high-stakes applications where prediction reliability is paramount. Current research focuses on developing robust UQ methods, particularly addressing biases in predictions and efficiently quantifying uncertainty in large language models and deep neural networks, often employing techniques such as conformal prediction, Bayesian methods, and ensemble learning. The ability to reliably quantify uncertainty enhances the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
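As one illustration of the conformal prediction technique mentioned above, the following is a minimal split-conformal sketch for classification. It is a generic sketch, not the method of any paper listed below; the `split_conformal_sets` helper, the nonconformity score, and the random stand-in data are illustrative assumptions.

```python
# Minimal split-conformal prediction sketch for classification (illustrative only).
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with roughly (1 - alpha) marginal coverage.

    cal_probs:  (n_cal, K) softmax scores on a held-out calibration set
    cal_labels: (n_cal,)   true labels for the calibration set
    test_probs: (n_test, K) softmax scores for new inputs
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction ceil((n+1)(1-alpha)) / n.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, q_level, method="higher")
    # Include every class whose score falls below the calibrated threshold.
    return test_probs >= 1.0 - q_hat  # boolean (n_test, K) set-membership mask

# Usage with random "model outputs" standing in for a real classifier.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1))
```

Larger prediction sets signal higher uncertainty for that input, which is one way such methods make model confidence explicit.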
Papers
Uncertainty Quantification for Image-based Traffic Prediction across Cities
Alexander Timans, Nina Wiedemann, Nishant Kumar, Ye Hong, Martin Raubal
Comparing the quality of neural network uncertainty estimates for classification problems
Daniel Ries, Joshua Michalenko, Tyler Ganter, Rashad Imad-Fayez Baiyasi, Jason Adams
Uncertainty Quantification for Molecular Property Predictions with Graph Neural Architecture Search
Shengli Jiang, Shiyi Qin, Reid C. Van Lehn, Prasanna Balaprakash, Victor M. Zavala
Towards Reliable Rare Category Analysis on Graphs via Individual Calibration
Longfeng Wu, Bowen Lei, Dongkuan Xu, Dawei Zhou
Physics-based Reduced Order Modeling for Uncertainty Quantification of Guided Wave Propagation using Bayesian Optimization
G. I. Drakoulas, T. V. Gortsas, D. Polyzos
Conformal prediction under ambiguous ground truth
David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet