High Uncertainty Anticipation
High uncertainty anticipation focuses on methods that accurately quantify and manage uncertainty in model predictions across diverse fields, with the aim of improving the reliability and trustworthiness of AI systems. Current research emphasizes integrating uncertainty estimation into a range of model architectures, including neural networks, diffusion models, and graph neural networks, often via Bayesian inference, conformal prediction, or deep ensembles. Reliable uncertainty quantification is paramount for safe and effective decision-making in high-stakes applications such as healthcare, autonomous driving, and finance.
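Of the techniques named above, split conformal prediction is perhaps the simplest to illustrate: it wraps any trained classifier and, using a held-out calibration set, produces prediction sets that cover the true label with a user-chosen probability. The sketch below is a minimal, self-contained illustration; the model outputs, variable names, and toy data are hypothetical and not taken from any of the listed papers.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal prediction: calibrate a score threshold on a
    held-out set so that prediction sets built with it cover the true
    label with probability >= 1 - alpha (finite-sample guarantee)."""
    n = len(cal_labels)
    # Nonconformity score: 1 minus the predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, q):
    """For each test point, return the set of classes whose
    nonconformity score falls at or below the calibrated threshold."""
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Toy usage with a hypothetical 3-class model's softmax outputs.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)   # calibration predictions
cal_labels = rng.integers(0, 3, size=500)         # calibration ground truth
q = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(prediction_set(test_probs, q))
```

The appeal of this recipe, and a reason it recurs across the papers below, is that the coverage guarantee holds regardless of how well the underlying model is calibrated: larger prediction sets simply signal higher uncertainty.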
Papers
Batch Selection for Multi-Label Classification Guided by Uncertainty and Dynamic Label Correlations
Ao Zhou, Bin Liu, Jin Wang, Grigorios Tsoumakas
Condensed Stein Variational Gradient Descent for Uncertainty Quantification of Neural Networks
Govinda Anantha Padmanabha, Cosmin Safta, Nikolaos Bouklas, Reese E. Jones
Training Data Reconstruction: Privacy due to Uncertainty?
Christina Runkel, Kanchana Vaishnavi Gandikota, Jonas Geiping, Carola-Bibiane Schönlieb, Michael Moeller
Improving Active Learning with a Bayesian Representation of Epistemic Uncertainty
Jake Thomas, Jeremie Houssineau
CUPS: Improving Human Pose-Shape Estimators with Conformalized Deep Uncertainty
Harry Zhang, Luca Carlone
Quantum-Cognitive Neural Networks: Assessing Confidence and Uncertainty with Human Decision-Making Simulations
Milan Maksimovic, Ivan S. Maksymov
Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty
Meera Hahn, Wenjun Zeng, Nithish Kannen, Rich Galt, Kartikeya Badola, Been Kim, Zi Wang
I Don't Know: Explicit Modeling of Uncertainty with an [IDK] Token
Roi Cohen, Konstantin Dobler, Eden Biran, Gerard de Melo
From Uncertainty to Trust: Enhancing Reliability in Vision-Language Models with Uncertainty-Guided Dropout Decoding
Yixiong Fang, Ziran Yang, Zhaorun Chen, Zhuokai Zhao, Jiawei Zhou