Uncertainty-Aware
Uncertainty-aware methods aim to improve the reliability and robustness of machine learning models and data analyses by explicitly quantifying uncertainty and propagating it into predictions and visualizations. Current research focuses on integrating uncertainty quantification techniques, such as Bayesian methods, ensemble models, and probabilistic neural networks, into diverse applications including robotics, medical imaging, and scientific data analysis. Such uncertainty awareness is crucial for building trustworthy AI systems and for improving the interpretability and reliability of scientific findings, and the resulting advances support sounder decision-making in high-stakes applications.
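As a minimal sketch of the ensemble approach mentioned above (illustrative only, not the method of any paper listed below), the example trains several small networks on bootstrap resamples of a toy dataset and treats the spread of their predictions as an uncertainty estimate; all data, model sizes, and variable names are assumptions chosen for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-D regression data with input-dependent (aleatoric) noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1 + 0.1 * np.abs(X[:, 0]))

# Ensemble-style uncertainty: fit several networks on bootstrap
# resamples and use their disagreement as an uncertainty signal.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
    model = MLPRegressor(hidden_layer_sizes=(32, 32),
                         max_iter=2000, random_state=seed)
    model.fit(X[idx], y[idx])
    ensemble.append(model)

# Predict on a grid that extends beyond the training range; the
# ensemble standard deviation typically grows where data is scarce.
X_test = np.linspace(-4, 4, 9).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)

for x, mu, s in zip(X_test[:, 0], mean, std):
    print(f"x={x:+.1f}  prediction={mu:+.3f}  uncertainty=±{s:.3f}")
```

Deep ensembles of this kind mainly capture epistemic (model) uncertainty; capturing aleatoric uncertainty, as in the segmentation paper below, typically requires the model to predict a noise distribution as well.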
Papers
Probabilistic 3D segmentation for aleatoric uncertainty quantification in full 3D medical data
Christiaan G. A. Viviers, Amaan M. M. Valiuddin, Peter H. N. de With, Fons van der Sommen
Learning Flight Control Systems from Human Demonstrations and Real-Time Uncertainty-Informed Interventions
Prashant Ganesh, J. Humberto Ramos, Vinicius G. Goecks, Jared Paquet, Matthew Longmire, Nicholas R. Waytowich, Kevin Brink