Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify how reliable a model's predictions are, addressing the need for trustworthy AI systems. Current research focuses on improving uncertainty quantification across a range of approaches, including Bayesian neural networks, deep ensembles, evidential deep learning, and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimation is crucial for responsible AI deployment, enabling better decision-making in high-stakes applications and fostering trust in AI-driven outcomes across scientific and practical fields. Concretely, this includes identifying unreliable predictions, improving model calibration, and mitigating issues such as hallucinations in large language models.
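As a concrete illustration of one of the techniques named above, the sketch below shows split conformal prediction for a simple regression task. It is a minimal example under stated assumptions: the data are synthetic, the base predictor is ordinary least squares, and none of it is drawn from the listed papers; any point predictor could be substituted.

```python
# Minimal sketch of split conformal prediction for regression (illustrative only).
# Assumptions: synthetic data, ordinary least-squares base predictor.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + Gaussian noise (hypothetical example data)
X = rng.uniform(-3, 3, size=(600, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=600)

# Split into a proper training set and a calibration set
X_train, y_train = X[:400], y[:400]
X_cal, y_cal = X[400:], y[400:]

# Fit the base predictor (least squares with an intercept term)
A_train = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

def predict(X_new):
    A = np.hstack([X_new, np.ones((len(X_new), 1))])
    return A @ coef

# Conformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - predict(X_cal))

# Calibration quantile of the scores gives an interval half-width
# with roughly (1 - alpha) marginal coverage
alpha = 0.1
n_cal = len(scores)
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")

# Prediction interval for a new point
x_new = np.array([[1.5]])
y_hat = predict(x_new)[0]
print(f"prediction: {y_hat:.2f}, "
      f"90% interval: [{y_hat - q_hat:.2f}, {y_hat + q_hat:.2f}]")
```

The appeal of this family of methods is that the coverage guarantee comes from the calibration-set quantile of the residuals rather than from the base model, so the same recipe applies to neural networks, ensembles, or any other predictor.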
Papers
Uncertainty estimation in spatial interpolation of satellite precipitation with ensemble learning
Georgia Papacharalampous, Hristos Tyralis, Nikolaos Doulamis, Anastasios Doulamis
Uncertainty Estimation in Multi-Agent Distributed Learning for AI-Enabled Edge Devices
Gleb Radchenko, Victoria Andrea Fill
HR-APR: APR-agnostic Framework with Uncertainty Estimation and Hierarchical Refinement for Camera Relocalisation
Changkun Liu, Shuai Chen, Yukun Zhao, Huajian Huang, Victor Prisacariu, Tristan Braud
Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond
Zhiyuan Wang, Jinhao Duan, Chenxi Yuan, Qingyu Chen, Tianlong Chen, Huaxiu Yao, Yue Zhang, Ren Wang, Kaidi Xu, Xiaoshuang Shi
Scalable and reliable deep transfer learning for intelligent fault detection via multi-scale neural processes embedded with knowledge
Zhongzhi Li, Jingqi Tu, Jiacheng Zhu, Jianliang Ai, Yiqun Dong
Discriminant Distance-Aware Representation on Deterministic Uncertainty Quantification Methods
Jiaxin Zhang, Kamalika Das, Sricharan Kumar