Logit Calibration

Logit calibration techniques aim to improve the reliability and interpretability of deep learning models' predictions by adjusting the model's output logits. Current research applies logit calibration to knowledge distillation, out-of-distribution detection, and adversarial robustness, and to problems such as overconfidence and imbalanced datasets, in tasks ranging from image classification to large language models. These methods typically involve simple, post-hoc adjustments to logits, such as scaling or normalization, which makes them easy to apply across diverse architectures and datasets while improving model trustworthiness and generalization. This work contributes to better model accuracy, reliability, and safety in a wide range of applications.
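The simple post-hoc scaling mentioned above can be illustrated with temperature scaling, one common logit-calibration method: raw logits are divided by a scalar temperature T before the softmax, softening overconfident probabilities. The sketch below is illustrative; the logit values and the temperature T = 2.0 are assumptions (in practice T is fit on a held-out validation set, e.g. by minimizing negative log-likelihood).

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scale(logits, temperature):
    # Post-hoc calibration: divide logits by a scalar T > 1 to soften
    # overconfident predictions. The ranking of classes is unchanged,
    # so accuracy is preserved while confidence is reduced.
    return softmax(logits / temperature)

# Hypothetical logits for a single 3-class example.
logits = np.array([[4.0, 1.0, 0.5]])
p_raw = softmax(logits)
p_cal = temperature_scale(logits, temperature=2.0)
print(p_raw)  # sharply peaked raw probabilities
print(p_cal)  # softened, better-calibrated probabilities
```

Because division by a positive scalar is monotonic, the predicted class is identical before and after scaling; only the confidence changes.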

Papers