Uncertain Reasoning

Uncertain reasoning focuses on developing methods for making decisions and predictions from incomplete or unreliable information. Current research emphasizes models that quantify and manage uncertainty explicitly, such as Bayesian networks, Gaussian processes, and deep learning models that distinguish aleatoric uncertainty (irreducible noise in the data) from epistemic uncertainty (the model's own lack of knowledge). These advances are crucial for improving the reliability and trustworthiness of AI systems in applications where uncertainty is inherent, from autonomous driving and robotics to medical diagnosis and scientific modeling. The field is also exploring efficient algorithms and new metrics for evaluating performance in uncertain environments, with particular attention to the trade-off between performance and reproducibility.
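
To make the idea of quantified predictive uncertainty concrete, here is a minimal sketch using Gaussian process regression, one of the model families mentioned above. The toy data, kernel choice, and thresholds are illustrative assumptions, not taken from any paper in this collection; the example only shows how a predictive standard deviation can serve as an uncertainty signal.

```python
# Minimal sketch: Gaussian process regression returning a predictive mean
# and standard deviation, i.e. an explicit uncertainty estimate that
# shrinks near observed data and grows away from it.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(20, 1))                     # sparse observations
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(20)

# RBF kernel captures smooth structure; WhiteKernel models observation
# noise (a simple stand-in for aleatoric uncertainty).
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

X_test = np.linspace(-5, 5, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)                # predictive uncertainty

# Uncertainty is larger outside the training range, the kind of signal a
# downstream decision-maker can use to defer or fall back to a safe action.
inside = (X_test.ravel() > -3) & (X_test.ravel() < 3)
print("max std inside  [-3, 3]:", std[inside].max())
print("max std outside [-3, 3]:", std[~inside].max())
```

In a decision-making pipeline, such an uncertainty estimate is typically thresholded or propagated downstream, for example to trigger a handover in autonomous driving or to flag a low-confidence diagnosis for human review.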

Papers