Uncertainty Estimation Task
Uncertainty estimation aims to quantify how reliable a machine learning model's predictions are, addressing problems such as overconfidence and limited explainability. Current research develops methods for a range of model families, including large language models (LLMs) and the deep neural networks used in computer vision and other domains, drawing on techniques such as conformal prediction, Bayesian meta-models, and ensembles to improve uncertainty quantification. This work is central to building trustworthy AI systems: it enables more reliable decision-making in high-stakes settings such as medical diagnosis and autonomous driving, and makes model predictions more transparent.
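As a concrete illustration of one of these techniques, the sketch below implements split conformal prediction for a generic classifier in Python. It is a minimal sketch under stated assumptions, not the method of any particular paper: the function name, the choice of nonconformity score (one minus the softmax probability of the true class), and the toy data are all illustrative.

```python
import numpy as np

def split_conformal_classify(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_probs:  (n, k) softmax scores on a held-out calibration set
    cal_labels: (n,)   true class indices for the calibration set
    test_probs: (m, k) softmax scores on the test set
    alpha:      target miscoverage rate (0.1 -> ~90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class
    # (an assumption; other scores are common in the literature).
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Prediction set: every class whose nonconformity score is below the
    # threshold, i.e. whose softmax probability is at least 1 - qhat.
    return test_probs >= 1.0 - qhat  # (m, k) boolean mask of included classes

# Toy usage with synthetic 3-class scores.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)
cal_labels = cal_probs.argmax(axis=1)  # pretend the model is usually right
test_probs = rng.dirichlet(np.ones(3), size=5)
print(split_conformal_classify(cal_probs, cal_labels, test_probs))
```

The size of each returned prediction set is the uncertainty signal: confident inputs yield singleton sets, ambiguous inputs yield larger ones, and roughly a 1 - alpha fraction of the sets contain the true label under the standard exchangeability assumption.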