High Epistemic Uncertainty
High epistemic uncertainty, that is, uncertainty that could be reduced with more data or improved models, is a critical concern in machine learning, particularly for high-stakes applications such as medical imaging and weather forecasting. Current research emphasizes methods that quantify epistemic uncertainty and disentangle it from aleatoric uncertainty (the irreducible randomness in the data), often using Bayesian neural networks, model ensembles (including ensembles of diffusion models), and risk-sensitive reinforcement learning algorithms. This work aims to improve model reliability and robustness, yielding more trustworthy predictions and decisions across diverse fields, including bias mitigation in computer vision and natural language processing.
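To make the quantification concrete, the sketch below shows one common ensemble-based decomposition: the entropy of the averaged predictive distribution is split into an aleatoric term (mean per-member entropy) and an epistemic term (the mutual information between the prediction and the model parameters). This is a generic illustration, not a method from any specific paper listed here; the array name `ensemble_probs` and the toy Dirichlet inputs are assumptions for the example.

```python
# Minimal sketch of entropy-based uncertainty decomposition for a deep ensemble.
# Assumption: per-member softmax outputs are already available as `ensemble_probs`.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) along the given axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(ensemble_probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    ensemble_probs: array of shape (n_members, n_samples, n_classes)
        Softmax outputs of each ensemble member.
    Returns (total, aleatoric, epistemic), each of shape (n_samples,).
    """
    mean_probs = ensemble_probs.mean(axis=0)          # average predictive distribution
    total = entropy(mean_probs)                       # H[ E_theta p(y|x,theta) ]
    aleatoric = entropy(ensemble_probs).mean(axis=0)  # E_theta H[ p(y|x,theta) ]
    epistemic = total - aleatoric                     # mutual information I(y; theta | x)
    return total, aleatoric, epistemic

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for 5 ensemble members evaluated on 3 inputs with 4 classes.
    probs = rng.dirichlet(alpha=np.ones(4), size=(5, 3))
    total, aleatoric, epistemic = decompose_uncertainty(probs)
    print("total:", total)
    print("aleatoric:", aleatoric)
    print("epistemic:", epistemic)
```

In this decomposition, high epistemic values flag inputs on which the ensemble members disagree, which is exactly the reducible uncertainty that more data or better models could shrink.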