Evidential Uncertainty
Evidential uncertainty focuses on quantifying and representing uncertainty in machine learning predictions, moving beyond point-estimate probabilities to incorporate the strength of evidence supporting each outcome. Current research emphasizes model architectures, such as evidential deep learning networks and adaptations of conformal prediction, that model uncertainty explicitly using frameworks like belief functions and the Dirichlet distribution. A model then returns not just a prediction but also a measure of confidence in that prediction, which is especially valuable in high-stakes applications like medical diagnosis and autonomous systems. The resulting gains in reliability and interpretability are driving advances across many fields.
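To make the Dirichlet-based framing concrete, here is a minimal sketch of the standard evidential deep learning mapping (in the style of subjective logic): non-negative per-class evidence, as a network head might produce via ReLU or softplus, is converted to Dirichlet parameters, per-class belief masses, and a single vacuity (uncertainty) mass. The function name and example evidence values are illustrative, not from the source.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Map non-negative per-class evidence to subjective-logic quantities.

    alpha_k = e_k + 1 are the Dirichlet parameters, S = sum(alpha) is the
    total strength, b_k = e_k / S are belief masses, and u = K / S is the
    vacuity (uncertainty) mass; beliefs and u sum to 1 by construction.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]                 # number of classes
    alpha = evidence + 1.0                 # Dirichlet parameters
    S = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength
    belief = evidence / S                  # per-class belief masses
    uncertainty = float(K / S.squeeze())   # mass left unassigned to any class
    prob = alpha / S                       # expected class probabilities
    return belief, uncertainty, prob

# Strong evidence for class 0 -> low uncertainty (u = 3/25 = 0.12)
b, u, p = dirichlet_uncertainty([20.0, 1.0, 1.0])

# No evidence at all -> maximal uncertainty (u = 1), uniform probabilities
b0, u0, p0 = dirichlet_uncertainty([0.0, 0.0, 0.0])
```

The key property is that uncertainty falls only as total evidence grows, so an out-of-distribution input that excites no class yields a near-uniform predictive distribution with vacuity close to 1 rather than an overconfident softmax.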