Ensemble Uncertainty

Ensemble uncertainty quantifies the confidence of predictions by measuring disagreement among multiple deep learning models, with the aim of improving the robustness and safety of AI systems. Current research focuses on ensemble methods, including Monte Carlo Dropout and variants that employ diverse priors or randomized activation functions, to estimate uncertainty in applications such as reinforcement learning, image processing, and physical system modeling. By attaching a measure of confidence to each prediction, this work helps build trust in AI systems and supports safer, more reliable deployment across fields.
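The core idea can be sketched with a toy example: train several models on resampled versions of the data and treat the spread of their predictions as an uncertainty estimate. The snippet below is a minimal illustration using bootstrap ensembles of linear regressors (all names and data are hypothetical, not from the papers summarized here); deep ensembles and Monte Carlo Dropout follow the same predict-many-times, measure-disagreement pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise, observed only on [-1, 1].
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x + rng.normal(scale=0.1, size=50)

def fit_linear(xs, ys):
    """Least-squares fit returning (slope, intercept)."""
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

def ensemble_predict(x_query, n_models=20):
    """Mean prediction and ensemble std (disagreement) at x_query."""
    preds = []
    for _ in range(n_models):
        # Each ensemble member sees a bootstrap resample of the data.
        idx = rng.integers(0, len(x), size=len(x))
        slope, intercept = fit_linear(x[idx], y[idx])
        preds.append(slope * x_query + intercept)
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean_in, std_in = ensemble_predict(0.0)    # inside the training range
mean_out, std_out = ensemble_predict(5.0)  # far outside it

# The ensemble std is typically much larger away from the training data,
# which is exactly the signal used to flag unreliable predictions.
```

In safety-critical settings, a prediction whose ensemble std exceeds a chosen threshold can be deferred to a fallback policy or a human, rather than acted on directly.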

Papers