High Uncertainty Anticipation
High uncertainty anticipation focuses on methods that quantify and manage uncertainty in model predictions, with the aim of improving the reliability and trustworthiness of AI systems. Current research emphasizes integrating uncertainty estimation into model architectures such as neural networks, diffusion models, and graph neural networks, often via Bayesian methods, conformal prediction, or ensembles; a sketch of one such technique follows below. This work is crucial for deploying AI in high-stakes settings such as healthcare, autonomous driving, and finance, where reliable uncertainty quantification underpins safe and effective decision-making.
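To make one of the named techniques concrete, here is a minimal sketch of split conformal prediction for regression. It is illustrative only: the synthetic data, the scikit-learn random forest, and the chosen coverage level are all assumptions, not from any of the papers listed below. The key property shown is that the calibration quantile yields prediction intervals with a finite-sample coverage guarantee, regardless of the underlying model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Split conformal prediction: calibrate a residual quantile on held-out
# data, then emit prediction intervals with guaranteed marginal coverage.
rng = np.random.default_rng(0)

# Synthetic 1-D regression data (illustrative assumption).
X = rng.uniform(-3, 3, size=(1200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=1200)

# Disjoint splits: proper training, calibration, and test sets.
X_train, y_train = X[:800], y[:800]
X_cal, y_cal = X[800:1000], y[800:1000]
X_test, y_test = X[1000:], y[1000:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for target coverage 1 - alpha. The (n + 1) factor
# gives the finite-sample guarantee under exchangeability.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals: point prediction +/- calibrated quantile.
preds = model.predict(X_test)
lower, upper = preds - q, preds + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage: {coverage:.2f} (target {1 - alpha:.2f})")
```

Because the calibration step treats the model as a black box, the same recipe applies unchanged to neural networks or graph neural networks; only the nonconformity score need be adapted for other tasks.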
Papers
Evaluating Latent Space Robustness and Uncertainty of EEG-ML Models under Realistic Distribution Shifts
Neeraj Wagh, Jionghao Wei, Samarth Rawal, Brent M. Berry, Yogatheesan Varatharajah
Characterizing Uncertainty in the Visual Text Analysis Pipeline
Pantea Haghighatkhah, Mennatallah El-Assady, Jean-Daniel Fekete, Narges Mahyar, Carita Paradis, Vasiliki Simaki, Bettina Speckmann
Interpretable Uncertainty Quantification in AI for HEP
Thomas Y. Chen, Biprateep Dey, Aishik Ghosh, Michael Kagan, Brian Nord, Nesar Ramachandra
Leveraging Distributional Bias for Reactive Collision Avoidance under Uncertainty: A Kernel Embedding Approach
Anish Gupta, Arun Kumar Singh, K. Madhava Krishna