High Uncertainty Anticipation
High uncertainty anticipation focuses on developing methods to accurately quantify and manage uncertainty in model predictions across diverse fields, aiming to improve the reliability and trustworthiness of AI systems. Current research emphasizes integrating uncertainty estimation into various model architectures, including neural networks, diffusion models, and graph neural networks, often employing techniques like Bayesian methods, conformal prediction, and ensemble methods. This work is crucial for deploying AI in high-stakes applications like healthcare, autonomous driving, and finance, where reliable uncertainty quantification is paramount for safe and effective decision-making.
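Of the techniques named above, split conformal prediction is perhaps the simplest to illustrate: it wraps any point predictor in prediction intervals with a guaranteed coverage level, using only a held-out calibration set. The sketch below is a minimal, self-contained example on synthetic 1-D regression data (the data, the least-squares base model, and the 90% coverage level are illustrative assumptions, not taken from any of the listed papers).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: y = 2x + Gaussian noise (illustration only).
x = rng.uniform(-1, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Split the data: fit a least-squares line on one half,
# reserve the other half for calibration.
x_fit, y_fit = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]
slope, intercept = np.polyfit(x_fit, y_fit, 1)

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - (slope * x_cal + intercept))

# For target coverage 1 - alpha, take the ceil((n+1)(1-alpha))/n
# empirical quantile of the scores (the finite-sample correction).
alpha = 0.1
n = len(scores)
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new input: point estimate +/- q.
x_new = 0.5
pred = slope * x_new + intercept
interval = (pred - q, pred + q)
```

Under exchangeability of calibration and test points, the interval contains the true response with probability at least 1 - alpha, regardless of how good the base model is; a poor model simply yields wider intervals rather than invalid ones.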
Papers
URCDC-Depth: Uncertainty Rectified Cross-Distillation with CutFlip for Monocular Depth Estimation
Shuwei Shao, Zhongcai Pei, Weihai Chen, Ran Li, Zhong Liu, Zhengguo Li
Group Fairness with Uncertainty in Sensitive Attributes
Abhin Shah, Maohao Shen, Jongha Jon Ryu, Subhro Das, Prasanna Sattigeri, Yuheng Bu, Gregory W. Wornell
Individual Fairness under Uncertainty
Wenbin Zhang, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, Jeremy Weiss