Neural Network Calibration

Neural network calibration aims to improve the reliability of deep learning models by aligning their predicted confidence scores with their actual accuracy. Current research focuses on calibration methods for a range of architectures, including convolutional neural networks, transformers, and graph neural networks, often employing techniques such as differentiable loss functions and variational inference. Well-calibrated uncertainty estimates are crucial for deploying these models in safety-critical applications such as autonomous driving and healthcare, and they improve the interpretability and trustworthiness of AI systems more generally. The field is actively exploring both train-time and post-hoc calibration methods and addressing challenges such as domain shift and open-set recognition.
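
As a concrete illustration, a minimal sketch (not tied to any specific paper in this collection): miscalibration is commonly quantified with the Expected Calibration Error (ECE), which measures the gap between confidence and accuracy across confidence bins, and the simplest post-hoc method is temperature scaling, which rescales logits by a single scalar fitted on held-out data. The function names and synthetic data below are illustrative assumptions; PyTorch is assumed as the framework.

```python
import torch
import torch.nn.functional as F

def expected_calibration_error(logits, labels, n_bins=15):
    """ECE: average |accuracy - confidence| over equally spaced
    confidence bins, weighted by the fraction of samples per bin."""
    probs = F.softmax(logits, dim=1)
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()

    bin_edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        prop_in_bin = in_bin.float().mean()
        if prop_in_bin > 0:
            gap = (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece += gap * prop_in_bin
    return ece.item()

def fit_temperature(val_logits, val_labels, max_iter=50):
    """Post-hoc temperature scaling: learn a single scalar T > 0 that
    rescales logits to minimise NLL on held-out validation data."""
    log_t = torch.zeros(1, requires_grad=True)  # optimise log T to keep T positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Illustrative example on synthetic, deliberately overconfident logits.
torch.manual_seed(0)
labels = torch.randint(0, 10, (2000,))
logits = 3.0 * (F.one_hot(labels, 10).float() + 0.8 * torch.randn(2000, 10))

T = fit_temperature(logits, labels)
print("ECE before:", expected_calibration_error(logits, labels))
print(f"ECE after temperature scaling (T={T:.2f}):",
      expected_calibration_error(logits / T, labels))
```

In practice the temperature is fitted on a validation split separate from the test data used to report ECE; train-time approaches instead modify the training objective itself, for example with calibration-aware loss terms.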

Papers