Evidential Model

Evidential models are a class of machine learning models designed to explicitly quantify uncertainty in their predictions: rather than returning simple point estimates, they output probability distributions or belief functions that represent both aleatoric (data-inherent) and epistemic (model-related) uncertainty. Current research applies these models to diverse tasks, including time-to-event prediction, domain adaptation, and multi-agent decision-making, often using deep learning architectures together with algorithms based on belief functions and Dirichlet distributions. Because they represent and manage uncertainty rigorously, evidential models can improve the reliability and trustworthiness of AI systems in safety-critical applications such as autonomous driving and medical diagnosis.
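To make the Dirichlet-based formulation concrete, the sketch below shows one common recipe from evidential deep learning (the subjective-logic view used in Dirichlet-output classifiers): a model produces non-negative per-class evidence, which is converted into Dirichlet concentration parameters, per-class belief masses, and a leftover epistemic-uncertainty mass. The function name and the use of raw NumPy arrays (rather than a trained network) are illustrative assumptions, not part of any specific paper's API.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Convert non-negative per-class evidence into Dirichlet-based
    belief masses and an epistemic uncertainty mass (a sketch of the
    subjective-logic formulation common in evidential deep learning)."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size             # number of classes
    alpha = evidence + 1.0        # Dirichlet concentration parameters
    s = alpha.sum()               # Dirichlet strength
    belief = evidence / s         # per-class belief masses
    uncertainty = k / s           # leftover mass = epistemic uncertainty
    prob = alpha / s              # expected class probabilities
    # Belief masses and the uncertainty mass always sum to 1.
    return belief, uncertainty, prob

# Strong evidence for class 0 -> small epistemic uncertainty mass.
b, u, p = dirichlet_uncertainty([40.0, 1.0, 1.0])
# No evidence at all -> uncertainty mass is 1 (maximally uncertain).
b0, u0, p0 = dirichlet_uncertainty([0.0, 0.0, 0.0])
```

Note the key property that distinguishes this from a softmax classifier: with zero evidence the predicted class probabilities are uniform *and* the uncertainty mass is 1, so the model can signal "I don't know" separately from "the classes are equally likely".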

Papers