Logit Adjustment
Logit adjustment is a technique for improving classification models in scenarios with imbalanced datasets (long-tailed distributions) or with data heterogeneity across distributed systems, as in federated learning. The core idea is to offset a model's logits by a term derived from the class priors (typically their logarithm), either during training through a modified loss or post hoc at prediction time, so that rare classes are not systematically under-predicted. Current research focuses on developing and refining logit adjustment methods, often incorporating them into broader frameworks that address issues such as feature instability, class-wise bias, and the limitations of standard loss functions like cross-entropy. These advances aim to improve model accuracy and robustness across applications including image classification, semantic segmentation, and zero-shot learning.
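The post-hoc variant described above can be sketched in a few lines: each logit is shifted by a scaled log-prior before the softmax, which boosts the predicted probability of rare classes. This is a minimal NumPy sketch, not any particular paper's implementation; the function name and the temperature parameter `tau` are illustrative.

```python
import numpy as np

def logit_adjusted_probs(logits, class_priors, tau=1.0):
    """Post-hoc logit adjustment.

    Subtracts tau * log(prior) from each class logit before the
    softmax, counteracting the bias toward head (frequent) classes.

    logits: array of shape (..., num_classes)
    class_priors: array of shape (num_classes,), entries sum to 1
    tau: scaling factor for the prior correction (tau=0 disables it)
    """
    adjusted = logits - tau * np.log(class_priors)
    # Numerically stable softmax over the last axis.
    shifted = adjusted - adjusted.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)
```

For example, with equal logits for two classes and priors of 0.9 and 0.1, the adjusted probabilities favor the rare class, which is exactly the correction a long-tail-aware classifier needs at test time.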