Uncertainty Metric
Uncertainty metrics quantify how reliable a machine learning model's predictions are, which matters most in high-stakes settings such as robotics and medical diagnosis. Current research focuses on developing and benchmarking these metrics across model architectures, from deep neural networks to large language models, with particular emphasis on separating different types of uncertainty (e.g., aleatoric vs. epistemic) and understanding their impact on model performance. Better uncertainty quantification is crucial for trustworthy and safe AI systems, enabling more reliable decision-making across applications and more reproducible scientific workflows.
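To make the aleatoric/epistemic distinction concrete, below is a minimal NumPy sketch of one standard decomposition: given softmax outputs from an ensemble (or several MC-dropout passes), the entropy of the averaged prediction is the total predictive uncertainty, the average per-member entropy is the aleatoric part, and their difference (the mutual information between the prediction and the model parameters) is the epistemic part. The function names and array shapes are illustrative assumptions, not taken from any particular paper listed here.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    member_probs: softmax outputs of shape (n_members, n_samples, n_classes),
    e.g. from a deep ensemble or repeated MC-dropout passes (shapes are
    assumptions for this sketch).
    Returns (total, aleatoric, epistemic), each of shape (n_samples,).
    """
    mean_probs = member_probs.mean(axis=0)            # ensemble-averaged prediction
    total = entropy(mean_probs)                       # H[ E_theta p(y|x, theta) ]
    aleatoric = entropy(member_probs).mean(axis=0)    # E_theta H[ p(y|x, theta) ]
    epistemic = total - aleatoric                     # mutual information I(y; theta | x)
    return total, aleatoric, epistemic

# Toy usage: 5 ensemble members, 3 inputs, 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
total, aleatoric, epistemic = decompose_uncertainty(probs)
print(total, aleatoric, epistemic)
```

Inputs where the members agree on a flat distribution score high aleatoric uncertainty, while inputs where the members confidently disagree score high epistemic uncertainty; by Jensen's inequality the epistemic term is always non-negative.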