Confidence-Aware
Confidence-aware systems aim to improve the reliability and robustness of machine learning models by explicitly quantifying prediction confidence and acting on it. Current research focuses on estimating confidence at various granularities (e.g., image-wide, instance-level, category-level), incorporating these estimates into training and decision-making (e.g., via confidence-weighted fusion, selective prediction, or dynamic thresholding), and using them to improve calibration and to mitigate issues such as overconfidence and distribution shift. This work matters because it enhances the trustworthiness and safety of AI systems across diverse applications, from medical image analysis and robotics to natural language processing and advertising.
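To make the selective-prediction idea concrete, here is a minimal sketch, assuming a classifier that outputs softmax probabilities; the `selective_predict` helper and the 0.9 threshold are illustrative assumptions, not drawn from the papers listed below. The top-class probability serves as the confidence score, and low-confidence inputs are abstained on rather than classified.

```python
import numpy as np

def selective_predict(probs: np.ndarray, threshold: float = 0.9):
    """Selective prediction: return the predicted class when the top
    softmax probability clears a confidence threshold, else abstain.

    probs: (n_samples, n_classes) array of softmax probabilities.
    Returns (predictions, confidence), with -1 marking abstentions.
    """
    confidence = probs.max(axis=1)            # top-class probability as confidence score
    predictions = probs.argmax(axis=1)
    predictions[confidence < threshold] = -1  # abstain on low-confidence inputs
    return predictions, confidence

# Illustrative usage: three inputs, three classes.
probs = np.array([
    [0.95, 0.03, 0.02],   # confident  -> predict class 0
    [0.40, 0.35, 0.25],   # uncertain  -> abstain
    [0.05, 0.91, 0.04],   # confident  -> predict class 1
])
preds, conf = selective_predict(probs, threshold=0.9)
print(preds)  # [ 0 -1  1]
```

Abstained inputs can then be routed to a fallback, such as a human reviewer or a larger model, which is the usual motivation for selective prediction in safety-critical settings.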
Papers
Confidence Matters: Revisiting Intrinsic Self-Correction Capabilities of Large Language Models
Loka Li, Zhenhao Chen, Guangyi Chen, Yixuan Zhang, Yusheng Su, Eric Xing, Kun Zhang
BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts
Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso