Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify the reliability of model predictions, addressing the critical need for trustworthy AI systems. Current research focuses on improving uncertainty quantification across diverse approaches, including Bayesian neural networks, deep ensembles, and newer methods such as evidential deep learning and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimation is crucial for responsible AI deployment: it enables better decision-making in high-stakes applications and fosters trust in AI-driven outcomes across scientific and practical fields, for example by flagging unreliable predictions, improving model calibration, and mitigating hallucinations in large language models.
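As one concrete illustration of the kind of technique surveyed here, the sketch below implements split conformal prediction for regression: a held-out calibration set is used to turn any point predictor's errors into prediction intervals with a coverage guarantee. The synthetic data, the random-forest point predictor, and the miscoverage level alpha = 0.1 are illustrative assumptions, not details drawn from any of the listed papers.

```python
# Minimal sketch of split conformal prediction for regression (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

alpha = 0.1  # target miscoverage: intervals should contain y roughly 90% of the time

# Synthetic data, split into a proper training set, a calibration set, and a test set.
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Fit any point predictor on the training split.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Nonconformity scores on the calibration split: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile of the calibration scores.
n = len(scores)
q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction intervals: point prediction +/- q_hat, with marginal coverage >= 1 - alpha.
y_pred = model.predict(X_test)
lower, upper = y_pred - q_hat, y_pred + q_hat
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage on the test split: {coverage:.3f} (target {1 - alpha:.2f})")
```

The same recipe applies to ensembles or Bayesian models by swapping in a different point predictor or nonconformity score; only the calibration step above is specific to conformal prediction.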
Papers
Diff-DAgger: Uncertainty Estimation with Diffusion Policy for Robotic Manipulation
Sung-Wook Lee, Yen-Ling Kuo
Do LLMs estimate uncertainty well in instruction-following?
Juyeon Heo, Miao Xiong, Christina Heinze-Deml, Jaya Narain
LoGU: Long-form Generation with Uncertainty Expressions
Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Sen Yang, Nigel Collier, Dong Yu, Deqing Yang
Hierarchical uncertainty estimation for learning-based registration in neuroimaging
Xiaoling Hu, Karthik Gopinath, Peirong Liu, Malte Hoffmann, Koen Van Leemput, Oula Puonti, Juan Eugenio Iglesias
MMLF: Multi-modal Multi-class Late Fusion for Object Detection with Uncertainty Estimation
Qihang Yang, Yang Zhao, Hong Cheng
Uncertainty Estimation and Out-of-Distribution Detection for LiDAR Scene Semantic Segmentation
Hanieh Shojaei, Qianqian Zou, Max Mehltretter
Edge AI Collaborative Learning: Bayesian Approaches to Uncertainty Estimation
Gleb Radchenko, Victoria Andrea Fill