Uncertainty Estimation
Uncertainty estimation in machine learning aims to quantify how reliable a model's predictions are, addressing the need for trustworthy AI systems. Current research focuses on improving uncertainty quantification across a range of approaches, including Bayesian neural networks, deep ensembles, evidential deep learning, and conformal prediction, often tailored to specific application domains (e.g., medical imaging, natural language processing). Accurate uncertainty estimation is crucial for responsible AI deployment: it supports better decision-making in high-stakes applications, helps identify unreliable predictions, improves model calibration, and mitigates issues such as hallucinations in large language models.
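To make one of the mentioned techniques concrete, below is a minimal sketch of split conformal prediction for classification. It uses synthetic softmax outputs in place of a trained classifier, and the nonconformity score (one minus the probability assigned to the true class) is just one common choice, not a method taken from any of the papers listed here.

```python
# Sketch: split conformal prediction for classification (assumed, illustrative setup).
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Compute the conformal threshold from a held-out calibration set."""
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level, clipped to 1 for small n.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, qhat):
    """Boolean mask of classes included in each prediction set."""
    return (1.0 - test_probs) <= qhat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_cal, n_test, n_classes = 500, 5, 3
    # Stand-in for a trained classifier's softmax outputs (synthetic data).
    cal_probs = rng.dirichlet(np.ones(n_classes) * 2.0, size=n_cal)
    cal_labels = np.array([rng.choice(n_classes, p=p) for p in cal_probs])
    test_probs = rng.dirichlet(np.ones(n_classes) * 2.0, size=n_test)

    qhat = conformal_quantile(cal_probs, cal_labels, alpha=0.1)
    for probs, mask in zip(test_probs, prediction_set(test_probs, qhat)):
        print(np.round(probs, 2), "->", np.flatnonzero(mask))
```

Larger prediction sets signal higher uncertainty for that input; under the usual exchangeability assumption, the sets cover the true label with probability at least 1 - alpha.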
Papers
On the Calibration and Uncertainty with Pólya-Gamma Augmentation for Dialog Retrieval Models
Tong Ye, Shijing Si, Jianzong Wang, Ning Cheng, Zhitao Li, Jing Xiao
On the uncertainty analysis of the data-enabled physics-informed neural network for solving neutron diffusion eigenvalue problem
Yu Yang, Helin Gong, Qihong Yang, Yangtao Deng, Qiaolin He, Shiquan Zhang
Window-Based Early-Exit Cascades for Uncertainty Estimation: When Deep Ensembles are More Efficient than Single Models
Guoxuan Xia, Christos-Savvas Bouganis
On the Connection between Concept Drift and Uncertainty in Industrial Artificial Intelligence
Jesus L. Lobo, Ibai Laña, Eneko Osaba, Javier Del Ser