Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify the reliability of model predictions, addressing the critical need for trustworthy AI systems. Current research focuses on improving uncertainty quantification across diverse model architectures, including Bayesian neural networks, ensembles, and novel methods like evidential deep learning and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimation is crucial for responsible AI deployment, enabling better decision-making in high-stakes applications and fostering greater trust in AI-driven outcomes across various scientific and practical fields. This includes identifying unreliable predictions, improving model calibration, and mitigating issues like hallucinations in large language models.
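Of the methods named above, split conformal prediction is simple enough to sketch in a few lines: score a held-out calibration set, take a corrected quantile of the scores, and widen each new prediction by that amount. Everything below (the synthetic data, the least-squares "model", the 90% coverage target) is illustrative, not drawn from any of the listed papers.

```python
# Minimal sketch of split conformal prediction for regression.
# The data, model, and coverage level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + noise
x = rng.uniform(0.0, 1.0, size=500)
y = 2.0 * x + rng.normal(0.0, 0.1, size=500)

# "Model": fit a slope on one half, calibrate on the other half
x_fit, y_fit = x[:250], y[:250]
x_cal, y_cal = x[250:], y[250:]
slope = np.sum(x_fit * y_fit) / np.sum(x_fit**2)

# Nonconformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - slope * x_cal)

# Conformal quantile for 90% coverage, with the finite-sample correction
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new input: [y_hat - q, y_hat + q]
x_new = 0.5
y_hat = slope * x_new
interval = (y_hat - q, y_hat + q)
```

The appeal of this recipe is that the coverage guarantee is distribution-free: it relies only on the calibration points and the new point being exchangeable, not on the model being well specified.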
Papers - Page 5
Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models
Qingcheng Zeng, Mingyu Jin, Qinkai Yu, Zhenting Wang, Wenyue Hua, Zihao Zhou, Guangyan Sun, Yanda Meng, Shiqing Ma, Qifan Wang, et al.
Expert-aware uncertainty estimation for quality control of neural-based blood typing
Ekaterina Zaychenkova, Dmitrii Iarchuk, Sergey Korchagin, Alexey Zaitsev, Egor Ershov
Improved Uncertainty Estimation of Graph Neural Network Potentials Using Engineered Latent Space Distances
Joseph Musielewicz, Janice Lan, Matt Uyttendaele, John R. Kitchin
Privacy Preserving Federated Learning in Medical Imaging with Uncertainty Estimation
Nikolas Koutsoubis, Yasin Yilmaz, Ravi P. Ramachandran, Matthew Schabath, Ghulam Rasool
Enhancing Diagnostic Reliability of Foundation Model with Uncertainty Estimation in OCT Images
Yuanyuan Peng, Aidi Lin, Meng Wang, Tian Lin, Ke Zou, Yinglin Cheng, Tingkun Shi, Xulong Liao, Lixia Feng, Zhen Liang, Xinjian Chen, Huazhu Fu, et al.