Label Distribution Shift
Label distribution shift (LDS) arises when the proportions of classes in a machine learning model's training data differ significantly from those in the data it encounters at test time. Current research focuses on adapting models to such shifts, including test-time adaptation (TTA) methods that recalibrate models during inference and algorithms that estimate the target label distribution and use it to correct predictions. Addressing LDS is crucial for building robust, reliable machine learning systems, particularly in applications such as medical diagnosis where class prevalence varies across populations or over time, and it improves the generalizability and trustworthiness of AI models in real-world settings.
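To make the prior-correction idea concrete, the sketch below (not drawn from any specific paper listed here) shows one standard recipe: estimate the target label distribution from a trained classifier's posteriors on unlabeled test data via EM (as in Saerens et al., 2002), then rescale those posteriors by the ratio of estimated target priors to training priors. All function and variable names are illustrative assumptions.

```python
# Minimal sketch of label-shift correction by target-prior estimation.
import numpy as np

def estimate_target_priors(probs, source_priors, n_iter=100, tol=1e-8):
    """EM estimate of target class priors from source-model posteriors.

    probs:         (n_samples, n_classes) posteriors from the source-trained model
    source_priors: (n_classes,) class frequencies observed in the training data
    """
    q = source_priors.copy()
    for _ in range(n_iter):
        # E-step: reweight posteriors by the current prior ratio q(y) / p(y)
        w = probs * (q / source_priors)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: new priors are the average responsibilities over the test set
        q_new = w.mean(axis=0)
        if np.abs(q_new - q).max() < tol:
            q = q_new
            break
        q = q_new
    return q

def adjust_posteriors(probs, source_priors, target_priors):
    """Rescale posteriors by the prior ratio and renormalize per sample."""
    adj = probs * (target_priors / source_priors)
    return adj / adj.sum(axis=1, keepdims=True)

# Toy usage with synthetic posteriors standing in for a real model's outputs.
rng = np.random.default_rng(0)
probs = rng.dirichlet([2.0, 1.0], size=500)   # stand-in for model posteriors
p_src = np.array([0.5, 0.5])                  # training-time class balance
q_est = estimate_target_priors(probs, p_src)
probs_adj = adjust_posteriors(probs, p_src, q_est)
print("estimated target priors:", q_est)
```

The same rescaling step applies whether the target priors come from this EM procedure, from a confusion-matrix estimator, or from any other source; only the estimation step differs across methods.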