Unified Uncertainty

Unified uncertainty research aims to quantify and integrate the various sources of uncertainty in a prediction (input data noise, model parameters, and inherent model limitations) into a single, coherent representation that improves both reliability and explainability. Current efforts focus on propagating input uncertainty through models such as neural networks, Bayesian frameworks, and cognitive diagnosis models, often decomposing total uncertainty into distinct components for interpretability and using techniques like Kalman filtering and Gaussian processes to estimate and manage it. This work is crucial for building robust, trustworthy AI systems across diverse applications, from personalized education to human activity recognition and high-stakes decision-making, because it provides a fuller picture of prediction confidence and a better understanding of model behavior.
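A common decomposition of the kind mentioned above splits total predictive uncertainty into an aleatoric part (data noise) and an epistemic part (model disagreement) via the law of total variance. The sketch below, a minimal and illustrative example not taken from any specific paper, assumes a deep ensemble in which each member predicts a Gaussian mean and variance per input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated predictions from M=5 ensemble members for N=4 inputs.
# Each member has a heteroscedastic head: it predicts a mean and a variance.
M, N = 5, 4
means = rng.normal(0.0, 1.0, size=(M, N))        # per-member predicted means
variances = rng.uniform(0.1, 0.5, size=(M, N))   # per-member predicted variances

# Law of total variance for a uniform mixture of Gaussians:
#   total = E[var]  (aleatoric: average predicted data noise)
#         + Var[mean] (epistemic: disagreement between members)
aleatoric = variances.mean(axis=0)
epistemic = means.var(axis=0)
total = aleatoric + epistemic

print("aleatoric:", aleatoric)
print("epistemic:", epistemic)
print("total:    ", total)
```

Reporting the two components separately, rather than only their sum, is what makes this kind of decomposition useful for interpretability: high epistemic uncertainty suggests the model needs more data in that region, while high aleatoric uncertainty indicates irreducible noise.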

Papers