Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence in predictions made by machine learning models, which is crucial for high-stakes applications where reliable predictions are paramount. Current research focuses on developing robust UQ methods, particularly on correcting biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often using techniques such as conformal prediction, Bayesian methods, and ensemble learning. Reliable uncertainty estimates enhance the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
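To make one of the techniques named above concrete, the sketch below shows split conformal prediction for a regression model: absolute residuals on a held-out calibration set serve as nonconformity scores, and their empirical quantile widens a point prediction into an interval with approximate marginal coverage. The synthetic data, random-forest model, and alpha value are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of split conformal prediction for regression (assumed setup,
# not the method of any specific paper listed below).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))          # synthetic inputs (assumption)
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=2000)

# Split into a proper training set and a calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile giving roughly 90% marginal coverage (alpha = 0.1).
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: point prediction +/- q.
x_new = np.array([[1.5]])
pred = model.predict(x_new)[0]
print(f"~90% prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")
```

Under the exchangeability assumption, the resulting intervals cover the true response at roughly the 1 - alpha rate regardless of the underlying model, which is why conformal prediction is a popular model-agnostic UQ wrapper.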
Papers
Photoelectric Factor Prediction Using Automated Learning and Uncertainty Quantification
Khalid L. Alsamadony, Ahmed Farid Ibrahim, Salaheldin Elkatatny, Abdulazeez Abdulraheem
Uncertainty-aware Evaluation of Time-Series Classification for Online Handwriting Recognition with Domain Shift
Andreas Klaß, Sven M. Lorenz, Martin W. Lauer-Schmaltz, David Rügamer, Bernd Bischl, Christopher Mutschler, Felix Ott
Uncertainty Quantification for Fairness in Two-Stage Recommender Systems
Lequn Wang, Thorsten Joachims
Uncertainty Quantification and Resource-Demanding Computer Vision Applications of Deep Learning
Julian Burghoff, Robin Chan, Hanno Gottschalk, Annika Muetze, Tobias Riedlinger, Matthias Rottmann, Marius Schubert