Adaptive Uncertainty
Adaptive uncertainty quantification aims to provide reliable, context-sensitive uncertainty estimates for machine learning models, particularly in complex settings such as generative AI and sequential decision-making. Current research focuses on methods that adjust uncertainty measures to local data characteristics, so that, for example, prediction intervals widen on harder inputs, using techniques such as conformal prediction, Gaussian process regression, and likelihood ratio-based confidence sets. This improved uncertainty quantification is crucial for the trustworthiness and robustness of AI systems across diverse applications, from autonomous vehicles to medical diagnosis, because it gives more nuanced assessments of model confidence.
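To make the idea of locally adaptive uncertainty concrete, the following is a minimal sketch of one such technique, normalized (locally weighted) split conformal prediction, where interval widths scale with an estimate of local difficulty. The function name, model choices, and synthetic data are illustrative assumptions, not taken from any of the listed papers.

```python
# A minimal sketch of locally adaptive (normalized) split conformal prediction.
# Names and data below are illustrative, not from a specific paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def adaptive_conformal_intervals(X_train, y_train, X_cal, y_cal, X_test, alpha=0.1):
    # Fit a point predictor on the proper training split.
    mean_model = GradientBoostingRegressor().fit(X_train, y_train)

    # Fit a second model on absolute residuals as a local difficulty
    # (spread) estimate sigma(x).
    resid = np.abs(y_train - mean_model.predict(X_train))
    spread_model = GradientBoostingRegressor().fit(X_train, resid)

    # Normalized nonconformity scores on the calibration set.
    sigma_cal = np.maximum(spread_model.predict(X_cal), 1e-6)
    scores = np.abs(y_cal - mean_model.predict(X_cal)) / sigma_cal

    # Finite-sample-corrected (1 - alpha) quantile of the scores.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, q_level)

    # Interval width scales with the local spread estimate, so intervals
    # widen in regions the model finds harder.
    mu = mean_model.predict(X_test)
    sigma = np.maximum(spread_model.predict(X_test), 1e-6)
    return mu - q * sigma, mu + q * sigma

# Toy usage with heteroscedastic synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.3 * np.abs(X[:, 0]))
lo, hi = adaptive_conformal_intervals(X[:800], y[:800], X[800:1600], y[800:1600], X[1600:])
coverage = np.mean((y[1600:] >= lo) & (y[1600:] <= hi))
print(f"empirical coverage ~ {coverage:.2f}")  # should land near the target 0.90
```

Under standard exchangeability assumptions, split conformal prediction guarantees marginal coverage of at least 1 - alpha; the normalization step does not change that guarantee but makes the intervals adapt to local data characteristics, as described above.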