Quantified Uncertainty
Research on quantified uncertainty develops methods to estimate and represent how reliable a machine learning model's predictions are, moving beyond bare point estimates to explicit uncertainty measures. Current work emphasizes disentangling different types of uncertainty (e.g., aleatoric noise inherent in the data versus epistemic uncertainty stemming from limited knowledge of the model) and tailoring estimation techniques to specific tasks, often using Bayesian methods, neural networks (including iterative architectures and ensembles), and probabilistic programming. Such methods are crucial for building trustworthy AI systems, improving the reliability of scientific conclusions drawn from data-driven models, and enabling safe deployment of AI in high-stakes applications like robotics and autonomous systems.
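The ensemble-based disentanglement mentioned above can be made concrete with a short sketch. The Python example below (assuming PyTorch; the `GaussianMLP` class, toy data, and hyperparameters are illustrative and not taken from any of the listed papers) trains a small deep ensemble whose members each predict a Gaussian mean and variance: averaging the members' predicted variances estimates the aleatoric component, while the variance of the members' predicted means estimates the epistemic component.

```python
# Minimal deep-ensemble sketch for disentangling aleatoric and epistemic
# uncertainty in 1-D regression. All names here are illustrative.
import torch
import torch.nn as nn

class GaussianMLP(nn.Module):
    """Small regressor that outputs a predictive mean and variance."""
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, x):
        out = self.body(x)
        mean, log_var = out[:, :1], out[:, 1:]
        return mean, torch.exp(log_var)  # exp keeps the variance positive

def nll(mean, var, y):
    # Gaussian negative log-likelihood, up to an additive constant.
    return (0.5 * (torch.log(var) + (y - mean) ** 2 / var)).mean()

# Toy 1-D data with input-dependent (heteroscedastic) noise.
torch.manual_seed(0)
x = torch.rand(512, 1) * 6 - 3
y = torch.sin(x) + 0.1 * torch.abs(x) * torch.randn_like(x)

# Train each ensemble member independently from a different random init.
ensemble = [GaussianMLP() for _ in range(5)]
for model in ensemble:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        mean, var = model(x)
        nll(mean, var, y).backward()
        opt.step()

# Disentangled uncertainty at test points.
x_test = torch.linspace(-5, 5, 200).unsqueeze(1)
with torch.no_grad():
    means, vars_ = zip(*(m(x_test) for m in ensemble))
means, vars_ = torch.stack(means), torch.stack(vars_)
aleatoric = vars_.mean(dim=0)   # average predicted data noise
epistemic = means.var(dim=0)    # disagreement between members
total = aleatoric + epistemic   # law of total variance
```

The decomposition in the last lines follows the law of total variance; the log-variance output head is a common design choice that guarantees positivity without constrained optimization.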
Papers
Latent BKI: Open-Dictionary Continuous Mapping in Visual-Language Latent Spaces with Quantifiable Uncertainty
Joey Wilson, Ruihan Xu, Yile Sun, Parker Ewen, Minghan Zhu, Kira Barton, Maani Ghaffari
Black-box Uncertainty Quantification Method for LLM-as-a-Judge
Nico Wagner, Michael Desmond, Rahul Nair, Zahra Ashktorab, Elizabeth M. Daly, Qian Pan, Martín Santillán Cooper, James M. Johnson, Werner Geyer