Reliable Uncertainty
Reliable uncertainty quantification in machine learning aims to provide trustworthy estimates of prediction confidence, which is crucial for deploying models in high-stakes applications such as autonomous driving and medical diagnosis. Current research focuses on methods that accurately capture both aleatoric (data-inherent) and epistemic (model-related) uncertainty, using techniques such as Bayesian neural networks, deep ensembles, evidential deep learning, and conformal prediction, sometimes tailored to specific architectures such as DeepONets. These advances improve the safety and reliability of AI systems across scientific and practical domains, enabling better-informed decisions wherever prediction uncertainty matters.
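As a concrete illustration of one of the techniques named above, the following is a minimal deep-ensemble sketch. It uses hypothetical toy data and scikit-learn MLPRegressor models standing in for the deep networks used in the cited work; each ensemble member is trained from a different random seed, and the spread of the members' predictions is read as an epistemic-uncertainty estimate.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))              # toy inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy targets (aleatoric noise)

# Train an ensemble of identically configured networks from different initializations.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

X_test = np.linspace(-5, 5, 50).reshape(-1, 1)      # includes out-of-distribution inputs
preds = np.stack([m.predict(X_test) for m in ensemble])   # shape: (members, test points)

mean = preds.mean(axis=0)        # ensemble prediction
epistemic = preds.std(axis=0)    # member disagreement as an epistemic-uncertainty proxy
print(mean[:3], epistemic[:3])

Member disagreement grows outside the training range [-3, 3], which is the qualitative behavior that uncertainty-aware deployment relies on. This is only a sketch under the stated assumptions; the papers below use full deep ensembles (and typically heteroscedastic output heads for aleatoric uncertainty), not scikit-learn MLPs.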
Papers
Training-Free Bayesianization for Low-Rank Adapters of Large Language Models
Haizhou Shi, Yibin Wang, Ligong Han, Huan Zhang, Hao Wang
AI-powered Digital Twin of the Ocean: Reliable Uncertainty Quantification for Real-time Wave Height Prediction with Deep Ensemble
Dongeon Lee, Sunwoong Yang, Jae-Won Oh, Su-Gil Cho, Sanghyuk Kim, Namwoo Kang