Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence of predictions made by machine learning models, which is crucial for high-stakes applications where reliable predictions are paramount. Current research focuses on developing robust UQ methods, particularly on addressing biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often using techniques such as conformal prediction, Bayesian methods, and ensemble learning. Reliable uncertainty estimates enhance the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
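To make one of the named techniques concrete, the sketch below shows split conformal prediction for regression: it wraps an already-fitted point predictor and uses a held-out calibration set to produce distribution-free prediction intervals with approximately (1 - alpha) coverage. This is a minimal illustrative sketch, not the method of any particular paper listed here; the `model`, `X_calib`, `y_calib`, and `X_test` names are hypothetical placeholders.

```python
import numpy as np

def conformal_interval(model, X_calib, y_calib, X_test, alpha=0.1):
    """Split conformal prediction: intervals with ~(1 - alpha) marginal coverage.

    `model` is any fitted regressor exposing a `predict` method (assumption).
    """
    # Nonconformity scores: absolute residuals on the held-out calibration set.
    residuals = np.abs(y_calib - model.predict(X_calib))

    # Finite-sample-corrected quantile level over the calibration residuals.
    n = len(residuals)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level)

    # Symmetric interval around the point prediction for each test input.
    preds = model.predict(X_test)
    return preds - q, preds + q

# Hypothetical usage with a previously fitted regressor:
# lower, upper = conformal_interval(model, X_calib, y_calib, X_test, alpha=0.1)
```

The interval width here is constant across test points; adaptive variants (e.g., normalized or quantile-based nonconformity scores) trade this simplicity for input-dependent uncertainty.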
Papers
Empirical evaluation of Uncertainty Quantification in Retrieval-Augmented Language Models for Science
Sridevi Wagle, Sai Munikoti, Anurag Acharya, Sara Smith, Sameera Horawalavithana
Uncertainty Quantification in Machine Learning for Biosignal Applications -- A Review
Ivo Pascal de Jong, Andreea Ioana Sburlea, Matias Valdenegro-Toro
Structural-Based Uncertainty in Deep Learning Across Anatomical Scales: Analysis in White Matter Lesion Segmentation
Nataliia Molchanova, Vatsal Raina, Andrey Malinin, Francesco La Rosa, Adrien Depeursinge, Mark Gales, Cristina Granziera, Henning Muller, Mara Graziani, Meritxell Bach Cuadra
Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks
Longze Li, Jiang Chang, Aleksandar Vakanski, Yachun Wang, Tiankai Yao, Min Xian
Uncertainty Quantification of Deep Learning for Spatiotemporal Data: Challenges and Opportunities
Wenchong He, Zhe Jiang