Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence of predictions made by machine learning models, which is crucial for high-stakes applications where reliable predictions are paramount. Current research focuses on developing robust UQ methods, particularly on addressing biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often employing techniques such as conformal prediction, Bayesian methods, and ensemble learning. The ability to reliably quantify uncertainty enhances the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
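To make one of the named techniques concrete, here is a minimal sketch of split conformal prediction for regression, using only NumPy on synthetic toy data (the data, the least-squares "model", and all variable names are illustrative assumptions, not taken from any paper listed below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: y = 2x + Gaussian noise.
x = rng.uniform(0.0, 1.0, 500)
y = 2.0 * x + rng.normal(0.0, 0.1, 500)

# Split into a fitting set and a disjoint calibration set.
x_fit, y_fit = x[:250], y[:250]
x_cal, y_cal = x[250:], y[250:]

# "Model": a simple least-squares slope through the origin.
slope = np.sum(x_fit * y_fit) / np.sum(x_fit**2)

def predict(t):
    return slope * t

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile for (1 - alpha) marginal coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new input: [prediction - q, prediction + q].
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

Under exchangeability of the calibration and test data, this interval covers the true response with probability at least 1 - alpha regardless of how good the underlying model is, which is why conformal prediction is attractive as a distribution-free wrapper around black-box predictors.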
Papers
Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning
Frederik Hoppe, Claudio Mayrink Verdun, Hannah Laus, Felix Krahmer, Holger Rauhut
With or Without Replacement? Improving Confidence in Fourier Imaging
Frederik Hoppe, Claudio Mayrink Verdun, Felix Krahmer, Marion I. Menzel, Holger Rauhut
Lightweight Uncertainty Quantification with Simplex Semantic Segmentation for Terrain Traversability
Judith Dijk, Gertjan Burghouts, Kapil D. Katyal, Bryanna Y. Yeh, Craig T. Knuth, Ella Fokkinga, Tejaswi Kasarla, Pascal Mettes
Interpretability of Uncertainty: Exploring Cortical Lesion Segmentation in Multiple Sclerosis
Nataliia Molchanova, Alessandro Cagol, Pedro M. Gordaliza, Mario Ocampo-Pineda, Po-Jui Lu, Matthias Weigel, Xinjie Chen, Adrien Depeursinge, Cristina Granziera, Henning Müller, Meritxell Bach Cuadra
Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in Transonic Aerodynamic Loads
Andrea Vaiuso, Gabriele Immordino, Marcello Righi, Andrea Da Ronch
Uncertainty Quantification in Table Structure Recognition
Kehinde Ajayi, Leizhen Zhang, Yi He, Jian Wu
Are you sure? Analysing Uncertainty Quantification Approaches for Real-world Speech Emotion Recognition
Oliver Schrüfer, Manuel Milling, Felix Burkhardt, Florian Eyben, Björn Schuller
Bayesian Entropy Neural Networks for Physics-Aware Prediction
Rahul Rathnakumar, Jiayu Huang, Hao Yan, Yongming Liu
Improving the performance of Stein variational inference through extreme sparsification of physically-constrained neural network models
Govinda Anantha Padmanabha, Jan Niklas Fuhg, Cosmin Safta, Reese E. Jones, Nikolaos Bouklas
DADEE: Well-calibrated uncertainty quantification in neural networks for barriers-based robot safety
Masoud Ataei, Vikas Dhiman