Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence in predictions made by machine learning models, which is crucial for high-stakes applications where reliable predictions are paramount. Current research focuses on developing robust UQ methods, particularly on addressing biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often using techniques such as conformal prediction, Bayesian methods, and ensemble learning. The ability to reliably quantify uncertainty enhances the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
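As a concrete illustration of one of the techniques mentioned above, the following is a minimal sketch of split conformal prediction for regression: residuals on a held-out calibration set are used to build prediction intervals with approximate 1 - alpha coverage. This is a generic, assumed example for illustration only; it is not the method of any paper listed below, and the function name, data, and parameters are invented for the sketch.

```python
import numpy as np

def conformal_interval(pred_calib, y_calib, pred_test, alpha=0.1):
    """Split conformal prediction intervals from calibration residuals (illustrative sketch)."""
    # Absolute residuals on the calibration split serve as conformity scores.
    scores = np.abs(y_calib - pred_calib)
    n = len(scores)
    # Finite-sample-corrected quantile level, clamped to 1.0 for small calibration sets.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, q_level, method="higher")
    # Symmetric interval around the point predictions.
    return pred_test - q, pred_test + q

# Usage on synthetic data with a simple polynomial regressor (assumed setup).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=600)
y = np.sin(x) + 0.2 * rng.standard_normal(600)
x_tr, y_tr = x[:300], y[:300]          # training split
x_cal, y_cal = x[300:500], y[300:500]  # calibration split
x_te = x[500:]                         # test split
coef = np.polyfit(x_tr, y_tr, deg=5)
lo, hi = conformal_interval(np.polyval(coef, x_cal), y_cal,
                            np.polyval(coef, x_te), alpha=0.1)
```

Under exchangeability of calibration and test points, intervals built this way cover the true response with probability at least 1 - alpha, regardless of the underlying model.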
Papers
Semantic Density: Uncertainty Quantification for Large Language Models through Confidence Measurement in Semantic Space
Xin Qiu, Risto Miikkulainen
Efficient Two-Stage Gaussian Process Regression Via Automatic Kernel Search and Subsampling
Shifan Zhao, Jiaying Lu, Ji Yang (Carl), Edmond Chow, Yuanzhe Xi
Stochastic Inference of Plate Bending from Heterogeneous Data: Physics-informed Gaussian Processes via Kirchhoff-Love Theory
Igor Kavrakov, Gledson Rodrigo Tondo, Guido Morgenthal
Uncertainty quantification by block bootstrap for differentially private stochastic gradient descent
Holger Dette, Carina Graw