Aleatoric Uncertainty
Aleatoric uncertainty, the inherent randomness in data, is a critical challenge in machine learning applications that demand high reliability. Unlike epistemic uncertainty, which reflects a model's limited knowledge and shrinks as more data becomes available, aleatoric uncertainty is irreducible and must instead be quantified and modeled. Current research addresses this with techniques such as Bayesian neural networks, probabilistic neural networks, and ensemble methods, alongside approaches such as conformal prediction and heteroscedastic regression. Modeling aleatoric uncertainty well enables more reliable predictions and uncertainty-aware decision-making, which in turn improves the trustworthiness and safety of AI systems in fields ranging from autonomous driving and robotics to medical diagnosis and scientific modeling.
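To make the heteroscedastic regression idea concrete, the following is a minimal sketch (not drawn from any of the papers below) of one common recipe: a network predicts both a mean and a log-variance for each input and is trained with the Gaussian negative log-likelihood, so the predicted variance absorbs input-dependent noise. It assumes PyTorch and uses a hypothetical toy dataset whose noise level grows with the input.

```python
# Minimal heteroscedastic regression sketch (assumes PyTorch).
# The network outputs a mean and a log-variance per input; the Gaussian
# negative log-likelihood lets the variance head capture aleatoric noise.
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)     # predicted mean
        self.log_var_head = nn.Linear(hidden, 1)  # predicted log-variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, target):
    # NLL of target under N(mean, exp(log_var)); predicting the
    # log-variance keeps the variance positive and the loss stable.
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Hypothetical toy data with input-dependent (heteroscedastic) noise.
torch.manual_seed(0)
x = torch.rand(512, 1) * 4 - 2
y = torch.sin(x) + torch.randn_like(x) * (0.1 + 0.4 * x.abs())

model = HeteroscedasticRegressor(in_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    mean, log_var = model(x)
    loss = gaussian_nll(mean, log_var, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# exp(log_var) is the model's per-input estimate of aleatoric variance.
mean, log_var = model(x)
print(log_var.exp().mean())
```

Note that this captures only the aleatoric component; the ensemble and Bayesian methods mentioned above are typically layered on top (e.g., averaging several such models) to estimate the epistemic component as well.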
Papers
Evaluating deep learning models for fault diagnosis of a rotating machinery with epistemic and aleatoric uncertainty
Reza Jalayer, Masoud Jalayer, Andrea Mor, Carlotta Orsenigo, Carlo Vercellis
Provable Uncertainty Decomposition via Higher-Order Calibration
Gustaf Ahdritz, Aravind Gollakota, Parikshit Gopalan, Charlotte Peale, Udi Wieder
Bayesian optimized deep ensemble for uncertainty quantification of deep neural networks: a system safety case study on sodium fast reactor thermal stratification modeling
Zaid Abulawi, Rui Hu, Prasanna Balaprakash, Yang Liu
Improving Active Learning with a Bayesian Representation of Epistemic Uncertainty
Jake Thomas, Jeremie Houssineau