Uncertainty-Aware
Uncertainty-aware methods aim to improve the reliability and robustness of machine learning models and data analysis by explicitly quantifying uncertainty and incorporating it into predictions and visualizations. Current research focuses on integrating uncertainty quantification techniques, such as Bayesian methods, ensemble models, and probabilistic neural networks, into diverse applications including robotics, medical imaging, and scientific data analysis. This emphasis on uncertainty awareness is crucial for building trustworthy AI systems, improving the interpretability of scientific findings, and enabling more dependable decision-making in high-stakes applications.
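As a concrete illustration of the ensemble-based uncertainty quantification mentioned above, the sketch below fits a bootstrap ensemble of polynomial regressors on toy data and uses the spread of the members' predictions as an uncertainty estimate. This is a minimal, self-contained Python example under assumed toy data; it is not taken from any of the papers listed below, and all names in it are illustrative.

    # Minimal sketch of ensemble-based uncertainty quantification.
    # All data, function names, and parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D regression data with input-dependent noise.
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.1 * np.abs(X[:, 0]), size=200)

    def fit_poly(X, y, degree=3):
        """Least-squares polynomial fit; returns the coefficient vector."""
        A = np.vander(X[:, 0], degree + 1)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def predict_poly(coef, X):
        """Evaluate a fitted polynomial at new inputs."""
        A = np.vander(X[:, 0], len(coef))
        return A @ coef

    # Bootstrap ensemble: each member is trained on a resampled dataset,
    # so members disagree more where the data constrain the model weakly.
    n_members = 20
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))
        members.append(fit_poly(X[idx], y[idx]))

    X_test = np.linspace(-4, 4, 9).reshape(-1, 1)
    preds = np.stack([predict_poly(c, X_test) for c in members])

    mean = preds.mean(axis=0)  # ensemble prediction
    std = preds.std(axis=0)    # proxy for epistemic uncertainty

    for x, m, s in zip(X_test[:, 0], mean, std):
        print(f"x={x:+.1f}  pred={m:+.3f}  uncertainty={s:.3f}")

Running the sketch shows the uncertainty estimate growing outside the training range (|x| > 3), which is the behavior uncertainty-aware systems exploit to flag unreliable predictions.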
Papers
Fine-Tuned Convex Approximations of Probabilistic Reachable Sets under Data-driven Uncertainties
Pengcheng Wu, Sonia Martinez, Jun Chen
NeU-NBV: Next Best View Planning Using Uncertainty Estimation in Image-Based Neural Rendering
Liren Jin, Xieyuanli Chen, Julius Rückin, Marija Popović
Hallucinated Adversarial Control for Conservative Offline Policy Evaluation
Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie, Andreas Krause