Logit Adjustment

Logit adjustment is a technique for improving classification models, typically by adding class-prior-dependent offsets to a model's output logits, in scenarios with imbalanced (long-tailed) label distributions or with heterogeneous data across distributed systems, as in federated learning. Current research focuses on developing and refining logit adjustment methods, often incorporating them into broader frameworks that address feature instability, class-wise bias, and the limitations of standard loss functions such as cross-entropy. These advances aim to improve model accuracy and robustness across applications including image classification, semantic segmentation, and zero-shot learning.
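As a concrete illustration of the basic idea, the sketch below shows a logit-adjusted cross-entropy loss in the style of Menon et al. (2021), which shifts each logit by the log of its class prior before applying the softmax. This is a minimal sketch, not any specific paper's implementation: the function name `logit_adjusted_loss`, the scaling parameter `tau`, and the toy priors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_priors, tau=1.0):
    """Cross-entropy with logit adjustment.

    Shifts each class logit by tau * log(prior) before the softmax,
    which penalizes head classes and counteracts long-tailed label bias.
    """
    adjustment = tau * torch.log(class_priors + 1e-12)  # shape: (num_classes,)
    return F.cross_entropy(logits + adjustment, targets)

# Toy usage: a 3-class problem with a long-tailed label distribution.
priors = torch.tensor([0.7, 0.2, 0.1])   # empirical class frequencies (assumed)
logits = torch.randn(8, 3)               # raw model outputs for a batch of 8
targets = torch.randint(0, 3, (8,))      # ground-truth labels
loss = logit_adjusted_loss(logits, targets, priors)
```

Equivalently, the adjustment can be applied post hoc: train with plain cross-entropy and subtract `tau * log(priors)` from the logits at prediction time, which rebalances the decision rule without retraining.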

Papers