Model Uncertainty
Model uncertainty, spanning both aleatoric (irreducible, arising from noise in the data) and epistemic (reducible, arising from limited data or model knowledge) sources of error, is a central concern for improving the reliability and trustworthiness of machine learning models. Current efforts focus on quantifying and decomposing these uncertainties across model architectures such as deep neural networks, boosted trees, and Bayesian methods, often using techniques like Monte Carlo dropout, deep ensembles, and conformal prediction. Managing model uncertainty effectively is crucial for building robust and safe AI systems, particularly in high-stakes domains such as healthcare, finance, and climate modeling, where reliable predictions are paramount. A clearer picture of where a model is uncertain supports better decision-making under uncertainty and enhances the overall trustworthiness of AI-driven insights.
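As a concrete illustration of one of the techniques mentioned above, the sketch below applies Monte Carlo dropout to a hypothetical regression network and decomposes the predictive variance into aleatoric and epistemic parts via the law of total variance. The model, data, and hyperparameters are illustrative assumptions, not taken from the papers listed below.

```python
# A minimal sketch of Monte Carlo dropout for uncertainty decomposition in a
# regression network (hypothetical toy model and data, for illustration only).
import torch
import torch.nn as nn


class MCDropoutRegressor(nn.Module):
    """Small MLP predicting a mean and a log-variance per input.

    Keeping dropout active at inference time turns each forward pass into a
    sample from an approximate posterior over the network weights.
    """

    def __init__(self, in_dim: int = 8, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)     # predictive mean
        self.logvar_head = nn.Linear(hidden, 1)   # per-input (aleatoric) log-variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)


@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples: int = 50):
    """Run repeated stochastic forward passes and decompose the variance.

    Law of total variance:
        total = E[predicted variance] (aleatoric) + Var[predicted means] (epistemic).
    """
    model.train()  # keep dropout active; in practice, freeze BatchNorm separately
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)          # (n_samples, batch, 1)
    variances = torch.stack(variances)
    aleatoric = variances.mean(dim=0)   # average predicted noise variance
    epistemic = means.var(dim=0)        # spread of the mean predictions
    return means.mean(dim=0), aleatoric, epistemic


# Toy usage on random inputs (placeholder data).
model = MCDropoutRegressor()
x = torch.randn(16, 8)
pred, aleatoric, epistemic = predict_with_uncertainty(model, x)
print(pred.shape, aleatoric.shape, epistemic.shape)
```

The same decomposition applies to deep ensembles by replacing the repeated dropout passes with one forward pass per independently trained ensemble member.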
Papers
Can input reconstruction be used to directly estimate uncertainty of a regression U-Net model? -- Application to proton therapy dose prediction for head and neck cancer patients
Margerie Huet-Dastarac, Dan Nguyen, Steve Jiang, John Lee, Ana Barragan Montero
Model Uncertainty based Active Learning on Tabular Data using Boosted Trees
Sharath M Shankaranarayana