Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence of predictions made by machine learning models, which is crucial in high-stakes applications where acting on an overconfident prediction is costly. Current research focuses on developing robust UQ methods, particularly on correcting biased or miscalibrated predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often using techniques such as conformal prediction, Bayesian methods, and ensemble learning. Reliable uncertainty estimates enhance the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery.
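To make one of these techniques concrete, the sketch below shows split conformal prediction for regression. It is a minimal illustration, not an implementation from any of the papers listed here; the names (`model`, `split_conformal_intervals`, the coverage level `alpha`) are illustrative, and it assumes a model that was already fitted on a separate training set and exposes a scikit-learn-style `predict` method.

```python
import numpy as np

def split_conformal_intervals(model, X_cal, y_cal, X_test, alpha=0.1):
    """Illustrative sketch of split conformal prediction for regression.

    Uses a held-out calibration set (X_cal, y_cal) to build prediction
    intervals on X_test with marginal coverage >= 1 - alpha, assuming
    the calibration and test points are exchangeable.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)

    # Finite-sample-corrected quantile level; "higher" avoids interpolating
    # below the level needed for the coverage guarantee.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    # Symmetric intervals around the point predictions.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

The (n + 1)/n correction and the conservative quantile are what give the finite-sample coverage guarantee; without them, coverage would only hold asymptotically. Bayesian and ensemble approaches instead derive uncertainty from posterior or across-member variance rather than from held-out residuals.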
Papers
Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution
Tailin Wu, Willie Neiswanger, Hongtao Zheng, Stefano Ermon, Jure Leskovec
Uncertainty Quantification via Stable Distribution Propagation
Felix Petersen, Aashwin Mishra, Hilde Kuehne, Christian Borgelt, Oliver Deussen, Mikhail Yurochkin
Neural machine translation of clinical procedure codes for medical diagnosis and uncertainty quantification
Pei-Hung Chung, Shuhan He, Norawit Kijpaisalratana, Abdel-badih el Ariss, Byung-Jun Yoon
Reconfidencing LLMs from the Grouping Loss Perspective
Lihu Chen, Alexandre Perez-Lebel, Fabian M. Suchanek, Gaël Varoquaux
Calibrated Uncertainty Quantification for Operator Learning via Conformal Prediction
Ziqi Ma, Kamyar Azizzadenesheli, Anima Anandkumar
Neural variational Data Assimilation with Uncertainty Quantification using SPDE priors
Maxime Beauchamp, Ronan Fablet, Simon Benaichouche, Pierre Tandeo, Nicolas Desassis, Bertrand Chapron
LTAU-FF: Loss Trajectory Analysis for Uncertainty in Atomistic Force Fields
Joshua A. Vita, Amit Samanta, Fei Zhou, Vincenzo Lordi
Analog In-Memory Computing with Uncertainty Quantification for Efficient Edge-based Medical Imaging Segmentation
Imane Hamzaoui, Hadjer Benmeziane, Zayneb Cherif, Kaoutar El Maghraoui