Predictive Uncertainty
Predictive uncertainty, the quantification of a model's confidence in its predictions, is a crucial research area for improving the reliability and trustworthiness of machine learning models. Current efforts focus on accurately estimating and calibrating uncertainty within specific model families such as Bayesian neural networks, graph neural networks, and ensembles, and on applying these methods to diverse tasks including classification, regression, and time series forecasting. This research is vital for deploying machine learning in high-stakes applications, such as medical diagnosis and autonomous systems, where understanding and managing uncertainty is paramount for safe and effective decision-making.
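To make the two recurring ideas above concrete, here is a minimal, self-contained sketch of how an ensemble's predictions can be aggregated into a predictive distribution, how its uncertainty can be summarized via predictive entropy, and how calibration can be checked with the standard binned expected calibration error (ECE). All function names and the toy data are illustrative assumptions, not code from any of the papers listed below.

import numpy as np

def ensemble_predict(member_probs):
    """Average per-member class probabilities into one predictive distribution.
    member_probs: array of shape (n_members, n_samples, n_classes)."""
    return member_probs.mean(axis=0)

def predictive_entropy(probs):
    """Entropy of the predictive distribution; higher means more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: bin predictions by confidence, then take the bin-mass-weighted
    average of |accuracy - mean confidence| over the bins."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece

# Toy demonstration with random ensemble logits (illustration only).
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 1000, 3))  # 5 members, 1000 samples, 3 classes
member_probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
probs = ensemble_predict(member_probs)
labels = rng.integers(0, 3, size=1000)
print("mean predictive entropy:", predictive_entropy(probs).mean())
print("ECE:", expected_calibration_error(probs, labels))

A well-calibrated model would yield an ECE near zero, meaning that among predictions made with, say, 80% confidence, roughly 80% are correct; the papers below study how to measure and improve this property in different settings.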
Papers
A Close Look into the Calibration of Pre-trained Language Models
Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, Heng Ji
Evaluating Point-Prediction Uncertainties in Neural Networks for Drug Discovery
Ya Ju Fan, Jonathan E. Allen, Kevin S. McLoughlin, Da Shi, Brian J. Bennion, Xiaohua Zhang, Felice C. Lightstone