Deterministic Uncertainty
Deterministic uncertainty methods (DUMs) aim to estimate uncertainty in deep learning models from a single forward pass, avoiding the computational cost of traditional Bayesian approaches such as deep ensembles or Monte Carlo sampling. Current research focuses on improving DUM calibration and accuracy through architectural innovations, such as prototype-based representations and decoupled training schemes for the core model and the uncertainty-estimation component. These advances address limitations like feature collapse and improve performance on tasks including medical image analysis and autonomous driving perception. The resulting reliable uncertainty quantification is crucial for deploying trustworthy AI systems in high-stakes applications.
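To make the idea concrete, below is a minimal sketch (not taken from any specific paper listed here) of a prototype/distance-based deterministic uncertainty head in the spirit of methods like DUQ: a deterministic encoder maps an input to a feature vector, learnable per-class prototypes convert distances in feature space into class scores, and a low maximum score is read as high uncertainty. Class and parameter names (`PrototypeUncertaintyHead`, `length_scale`) are illustrative assumptions, not an established API.

```python
# Hedged sketch of a distance-to-prototype deterministic uncertainty head.
# A single deterministic forward pass yields both a prediction and an
# uncertainty estimate; all names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class PrototypeUncertaintyHead(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int, length_scale: float = 0.5):
        super().__init__()
        # One learnable prototype (class centroid) per class in feature space.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feature_dim))
        self.length_scale = length_scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from each feature vector to each prototype.
        dists = torch.cdist(features, self.prototypes, p=2).pow(2)
        # RBF kernel maps distances to scores in (0, 1]; if every score is small,
        # the input is far from all prototypes, i.e. uncertainty is high.
        return torch.exp(-dists / (2 * self.length_scale ** 2))


if __name__ == "__main__":
    encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
    head = PrototypeUncertaintyHead(feature_dim=32, num_classes=10)

    x = torch.randn(8, 16)                # a batch of 8 toy inputs
    scores = head(encoder(x))             # (8, 10) class scores, one forward pass
    confidence, prediction = scores.max(dim=1)
    uncertainty = 1.0 - confidence        # simple deterministic uncertainty estimate
    print(prediction, uncertainty)
```

In practice such heads are paired with regularization on the encoder (e.g. spectral normalization or a gradient penalty) so that feature-space distances remain meaningful and feature collapse is avoided; that regularization is omitted here for brevity.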