Evidential Deep Learning
Evidential Deep Learning (EDL) is a growing field that aims to improve the reliability of deep learning models by explicitly quantifying predictive uncertainty. Rather than outputting a single point estimate, an EDL classifier typically predicts the parameters of a Dirichlet distribution over class probabilities, so one forward pass yields both a prediction and an uncertainty mass. Current research focuses on refining EDL architectures and training objectives, addressing limitations such as overconfidence and sensitivity to conflicting evidence in multi-view or incomplete-data scenarios. Reliable uncertainty estimation is crucial for deploying deep learning in high-stakes applications such as medical diagnosis and autonomous driving, where knowing when a model is unsure is essential for safe, trustworthy decision-making.
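The Dirichlet-based uncertainty mentioned above can be sketched in a few lines. This is a minimal illustration of the standard EDL formulation (non-negative evidence mapped to Dirichlet parameters), not the method of any specific paper listed below; the function name and the softplus evidence mapping are illustrative choices.

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Illustrative EDL head: map raw outputs to Dirichlet parameters,
    expected class probabilities, belief masses, and an uncertainty mass."""
    evidence = np.log1p(np.exp(logits))  # softplus: non-negative evidence e_k
    alpha = evidence + 1.0               # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum()                      # Dirichlet strength
    prob = alpha / S                     # expected class probabilities
    belief = evidence / S                # belief mass per class
    u = len(alpha) / S                   # uncertainty mass; u + sum(belief) = 1
    return prob, belief, u

# Near-zero evidence -> near-maximal uncertainty, roughly uniform probabilities.
prob, belief, u = dirichlet_uncertainty(np.array([-50.0, -50.0, -50.0]))
```

With strong evidence for one class, the Dirichlet strength S grows, the uncertainty mass u = K/S shrinks, and the expected probabilities concentrate on that class.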
Papers
Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection
Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Kaifeng Pang, Demetri Terzopoulos, Kyunghyun Sung
Accurate Passive Radar via an Uncertainty-Aware Fusion of Wi-Fi Sensing Data
Marco Cominelli, Francesco Gringoli, Lance M. Kaplan, Mani B. Srivastava, Federico Cerutti