Ensemble Uncertainty
Ensemble uncertainty quantifies the reliability of predictions made by combining multiple deep learning models, with the aim of improving the robustness and safety of AI systems. Current research focuses on ensemble methods, including Monte Carlo Dropout and variants that employ diverse priors or randomized activation functions, to estimate uncertainty in applications such as reinforcement learning, image processing, and physical system modeling. This work is important for building trust in AI systems: by attaching a measure of confidence to each prediction, it enables safer and more reliable deployment across a range of fields.
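To make the idea concrete, below is a minimal PyTorch sketch of the basic deep-ensemble approach: several independently initialized and trained models produce predictions, and the disagreement (standard deviation) across members is used as the uncertainty estimate. The model architecture, toy data, and names such as make_model are illustrative assumptions, not taken from any of the referenced papers.

```python
# Minimal deep-ensemble uncertainty sketch (illustrative toy example).
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    # Small regression MLP; each ensemble member gets its own random initialization.
    return nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

# Toy 1-D regression data: y = sin(x) + noise.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

# Train each ensemble member independently on the same data.
ensemble = [make_model() for _ in range(5)]
for model in ensemble:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

# Ensemble uncertainty: the mean over members is the point prediction and the
# standard deviation across members is the uncertainty signal.
x_test = torch.linspace(-4, 4, 50).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([model(x_test) for model in ensemble])  # (members, N, 1)
mean = preds.mean(dim=0)
std = preds.std(dim=0)  # high std -> members disagree -> high uncertainty
print(mean[:3].squeeze(), std[:3].squeeze())
```

Monte Carlo Dropout approximates the same idea more cheaply: a single network is trained with dropout, the dropout layers are kept active at inference time, and the mean and spread over several stochastic forward passes play the roles of the ensemble mean and standard deviation above.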