High Uncertainty Anticipation
High uncertainty anticipation focuses on methods that quantify and manage uncertainty in model predictions across diverse fields, with the goal of making AI systems more reliable and trustworthy. Current research integrates uncertainty estimation into a range of model architectures, including neural networks, diffusion models, and graph neural networks, typically via Bayesian methods, conformal prediction, or ensembles. Reliable uncertainty quantification is especially important when deploying AI in high-stakes applications such as healthcare, autonomous driving, and finance, where it underpins safe and effective decision-making.
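As a concrete illustration of one technique named above, the sketch below applies split conformal prediction to a regression model: residuals on a held-out calibration set yield a quantile that turns point predictions into intervals with marginal coverage of at least 1 − α (under exchangeability). The data, model choice, and parameters are hypothetical and not drawn from any of the listed papers.

```python
# Minimal sketch of split conformal prediction for regression.
# Illustrative only: synthetic data and an arbitrary base model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: y = sin(x) + noise (purely for demonstration).
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=2000)

# Split into a proper training set and a held-out calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for miscoverage level alpha (finite-sample correction).
alpha = 0.1
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction intervals with marginal coverage >= 1 - alpha (under exchangeability).
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = model.predict(X_test)
lower, upper = preds - q_hat, preds + q_hat
for x, lo, hi in zip(X_test[:, 0], lower, upper):
    print(f"x={x:+.2f}: [{lo:+.2f}, {hi:+.2f}]")
```

The same recipe applies to any base predictor; swapping in a deep ensemble would additionally provide a model-based variance estimate alongside the distribution-free interval.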
Papers
Investigating the Impact of Model Instability on Explanations and Uncertainty
Sara Vera Marjanović, Isabelle Augenstein, Christina Lioma
DiffusionNOCS: Managing Symmetry and Uncertainty in Sim2Real Multi-Modal Category-level Pose Estimation
Takuya Ikeda, Sergey Zakharov, Tianyi Ko, Muhammad Zubair Irshad, Robert Lee, Katherine Liu, Rares Ambrus, Koichi Nishiwaki
Deal, or no deal (or who knows)? Forecasting Uncertainty in Conversations using Large Language Models
Anthony Sicilia, Hyunwoo Kim, Khyathi Raghavi Chandu, Malihe Alikhani, Jack Hessel
Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models
Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, Bryan Hooi