High Uncertainty Anticipation
High uncertainty anticipation concerns methods for accurately quantifying and managing uncertainty in model predictions across diverse fields, with the aim of making AI systems more reliable and trustworthy. Current research emphasizes integrating uncertainty estimation into a range of model architectures, including neural networks, diffusion models, and graph neural networks, often via Bayesian inference, conformal prediction, or ensembling. This work is crucial for deploying AI in high-stakes applications such as healthcare, autonomous driving, and finance, where reliable uncertainty quantification is paramount for safe and effective decision-making.
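As a concrete illustration of one of the techniques mentioned above, the sketch below shows split conformal prediction for a regression model: calibration residuals are used to turn point predictions into intervals with a marginal coverage guarantee. The predictor, data, and function names here are hypothetical illustrations, not drawn from any of the listed papers.

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction for regression.

    Given a fitted point predictor `predict`, a held-out calibration set
    (X_cal, y_cal), and test inputs X_test, return prediction intervals
    with marginal coverage of at least 1 - alpha (under exchangeability).
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - predict(X_cal))

    # Conformal quantile with the finite-sample correction.
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")

    # Symmetric intervals around the point predictions.
    preds = predict(X_test)
    return preds - q_hat, preds + q_hat

if __name__ == "__main__":
    # Toy example with a deliberately simple "model" that predicts sin(x).
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(500)
    model = lambda X: np.sin(X[:, 0])

    lo, hi = split_conformal_interval(model, X[:300], y[:300], X[300:], alpha=0.1)
    coverage = np.mean((y[300:] >= lo) & (y[300:] <= hi))
    print(f"Empirical coverage: {coverage:.2f}")  # expect roughly 0.90
```

The same interface could wrap any point predictor (a neural network, a graph neural network, or a diffusion-model-based regressor); only the calibration step above changes the output from a point estimate to a calibrated interval.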
Papers
Safe multi-agent motion planning under uncertainty for drones using filtered reinforcement learning
Sleiman Safaoui, Abraham P. Vinod, Ankush Chakrabarty, Rien Quirynen, Nobuyuki Yoshikawa, Stefano Di Cairano
Accelerating Generalized Linear Models by Trading off Computation for Uncertainty
Lukas Tatzel, Jonathan Wenger, Frank Schneider, Philipp Hennig
A Deep Learning Analysis of Climate Change, Innovation, and Uncertainty
Michael Barnett, William Brock, Lars Peter Hansen, Ruimeng Hu, Joseph Huang
Knowledge from Uncertainty in Evidential Deep Learning
Cai Davies, Marc Roig Vilamala, Alun D. Preece, Federico Cerutti, Lance M. Kaplan, Supriyo Chakraborty
Reducing Uncertainty in Sea-level Rise Prediction: A Spatial-variability-aware Approach
Subhankar Ghosh, Shuai An, Arun Sharma, Jayant Gupta, Shashi Shekhar, Aneesh Subramanian