Uncertainty Quantification
Uncertainty quantification (UQ) aims to assess and represent the confidence of predictions made by machine learning models, which is essential in high-stakes applications where decisions depend on reliable predictions. Current research focuses on developing robust UQ methods, particularly on correcting biases in predictions and on efficiently quantifying uncertainty in large language models and deep neural networks, often using techniques such as conformal prediction, Bayesian methods, and ensemble learning. Reliable uncertainty estimates enhance the trustworthiness and applicability of machine learning across diverse fields, from healthcare diagnostics and autonomous driving to climate modeling and drug discovery. A minimal illustration of one such technique is sketched below.
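As a concrete example of one technique named above, the following sketch shows split conformal prediction for regression. It is a minimal, self-contained illustration using synthetic data and scikit-learn's RandomForestRegressor as a stand-in base model; it is not drawn from any of the papers listed here, and the data, model, and parameter choices are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(2000)

# Split into a proper training set and a held-out calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Fit any point predictor; a random forest stands in for the base model here.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for miscoverage level alpha, with finite-sample correction.
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction intervals with marginal coverage >= 1 - alpha under exchangeability.
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = model.predict(X_test)
lower, upper = preds - q_hat, preds + q_hat
for x, lo, hi in zip(X_test[:, 0], lower, upper):
    print(f"x = {x:+.2f}: interval [{lo:.2f}, {hi:.2f}]")
```

The appeal of this approach is that the coverage guarantee holds regardless of how well the base model fits, provided the calibration and test data are exchangeable; swapping in a neural network or an ensemble changes only the point predictor, not the calibration step.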
Papers
Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models
Yuanzhe Wang, Alexandre M. Tartakovsky
Conformalized Interval Arithmetic with Symmetric Calibration
Rui Luo, Zhixin Zhou
Unconditional Truthfulness: Learning Conditional Dependency for Uncertainty Quantification of Large Language Models
Artem Vazhentsev, Ekaterina Fadeeva, Rui Xing, Alexander Panchenko, Preslav Nakov, Timothy Baldwin, Maxim Panov, Artem Shelmanov
Quantification of total uncertainty in the physics-informed reconstruction of CVSim-6 physiology
Mario De Florio, Zongren Zou, Daniele E. Schiavazzi, George Em Karniadakis
MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data Uncertainty
Yongjin Yang, Haneul Yoo, Hwaran Lee
Uncertainty Quantification in Alzheimer's Disease Progression Modeling
Wael Mobeirek, Shirley Mao