Aleatoric Uncertainty
Aleatoric uncertainty refers to the irreducible randomness inherent in the data itself, and quantifying it is a central challenge in machine learning applications that demand high reliability. Current research focuses on accurately quantifying and modeling this uncertainty, commonly using Bayesian neural networks, probabilistic neural networks, and ensemble methods, alongside newer approaches such as conformal prediction and heteroscedastic regression. Addressing aleatoric uncertainty enables more reliable predictions and uncertainty-aware decision-making, which is essential for the trustworthiness and safety of AI systems in fields ranging from autonomous driving and robotics to medical diagnosis and scientific modeling.
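To make one of these techniques concrete, the sketch below illustrates heteroscedastic regression in PyTorch: the network predicts both a mean and a log-variance for each input and is trained with a Gaussian negative log-likelihood, so the learned variance tracks input-dependent (aleatoric) noise. This is a minimal illustrative sketch, not the method of any paper listed below; the model architecture, toy data, and hyperparameters are all hypothetical.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Predicts a mean and a log-variance per input, so the learned
    variance can capture input-dependent (aleatoric) noise."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)
        # Predicting log-variance keeps the variance positive and
        # stabilizes optimization.
        self.log_var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, target):
    """Negative log-likelihood of target under N(mean, exp(log_var)):
    the model is penalized for being confidently wrong."""
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Toy data: noise grows with |x|, i.e., the noise is heteroscedastic.
torch.manual_seed(0)
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + 0.1 * x.abs() * torch.randn_like(x)

model = HeteroscedasticRegressor(in_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    mean, log_var = model(x)
    loss = gaussian_nll(mean, log_var, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# exp(log_var) now estimates the input-dependent aleatoric variance;
# it should be larger near |x| = 3 than near x = 0.
mean, log_var = model(x)
print("predicted std at x=0 vs x=3:",
      log_var[256].exp().sqrt().item(), log_var[-1].exp().sqrt().item())
```

Training on the negative log-likelihood rather than plain squared error is what lets the variance head learn: where the data are noisy, the model can lower its loss by reporting a larger variance instead of chasing the noise with the mean.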
Papers
SPUQ: Perturbation-Based Uncertainty Quantification for Large Language Models
Xiang Gao, Jiaxin Zhang, Lalla Mouatadid, Kamalika Das
From Displacements to Distributions: A Machine-Learning Enabled Framework for Quantifying Uncertainties in Parameters of Computational Models
Taylor Roper, Harri Hakula, Troy Butler
Uncertainty-Aware Prediction and Application in Planning for Autonomous Driving: Definitions, Methods, and Comparison
Wenbo Shao, Jiahui Xu, Zhong Cao, Hong Wang, Jun Li