Confidence-Aware

Confidence-aware systems aim to improve the reliability and robustness of machine learning models by explicitly quantifying prediction confidence and using it downstream. Current research focuses on estimating confidence at various granularities (e.g., image-wide, instance-level, or category-level), on incorporating confidence into training and decision-making (e.g., through confidence-weighted fusion, selective prediction, or dynamic thresholding), and on using confidence estimates to improve calibration and to mitigate overconfidence and distribution shift. This work is significant because it strengthens the trustworthiness and safety of AI systems across diverse applications, from medical image analysis and robotics to natural language processing and advertising.
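
As a concrete illustration of selective prediction, the minimal sketch below abstains whenever the model's top softmax probability falls below a fixed threshold; the function name, threshold value, and example inputs are hypothetical, chosen only for illustration, and real systems typically tune the threshold on held-out data.

```python
import numpy as np

def selective_predict(probs: np.ndarray, threshold: float = 0.8):
    """Return the predicted class index, or None (abstain) when the
    top softmax probability falls below `threshold`.

    probs: 1-D array of class probabilities for a single input.
    """
    confidence = probs.max()
    if confidence < threshold:
        return None  # defer to a human expert or a fallback model
    return int(probs.argmax())

# A confident prediction is returned; an uncertain one is deferred.
print(selective_predict(np.array([0.05, 0.90, 0.05])))  # -> 1
print(selective_predict(np.array([0.40, 0.35, 0.25])))  # -> None
```

Calibration methods such as temperature scaling address overconfidence by rescaling a model's logits with a single parameter fitted on a validation set; the sketch below shows only the rescaling step, with an assumed (not fitted) temperature and made-up logits.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def temperature_scale(logits: np.ndarray, T: float) -> np.ndarray:
    """Soften (T > 1) or sharpen (T < 1) the predictive distribution.
    In practice, T is fitted by minimizing NLL on held-out data."""
    return softmax(logits / T)

logits = np.array([4.0, 1.0, 0.5])
print(temperature_scale(logits, T=1.0))  # raw, overconfident
print(temperature_scale(logits, T=2.5))  # calibrated, softer
```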
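A common way to measure how well such calibration works is the gap between a model's stated confidence and its empirical accuracy, binned over confidence levels (the expected calibration error); selective prediction is likewise evaluated by the accuracy/coverage trade-off as the threshold varies.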

Papers