Bias Label
Bias in labeled data degrades both the fairness and the accuracy of machine learning models, motivating a growing body of research on mitigation. Current efforts concentrate on algorithms and model architectures that detect and correct for bias, often via techniques such as pseudo-label refinement, logit adjustment, and contrastive learning, in some cases without requiring explicit bias annotations. This work is crucial for the reliable and ethical deployment of machine learning systems, particularly in sensitive domains such as healthcare and criminal justice, where biased models can have severe real-world consequences.
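To make the logit-adjustment idea concrete, the snippet below is a minimal PyTorch sketch of a logit-adjusted cross-entropy loss: logits are shifted by the log of estimated class frequencies so that over-represented (bias-aligned) classes must be predicted with larger margins. It is an illustrative sketch, not the method of any specific paper listed here; the prior estimates and the `tau` scaling parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_priors, tau=1.0):
    """Cross-entropy on logits shifted by tau * log(class prior).

    Inflating the logits of frequent classes before the softmax forces the
    model to separate rare (or bias-conflicting) classes more strongly,
    which counteracts the skew inherited from biased labels.
    """
    adjustment = tau * torch.log(class_priors + 1e-12)  # shape: (num_classes,)
    return F.cross_entropy(logits + adjustment, targets)

# Usage sketch: priors estimated from the (possibly biased) label distribution.
priors = torch.tensor([0.7, 0.2, 0.1])      # assumed empirical class frequencies
logits = torch.randn(8, 3)                  # model outputs for a batch of 8
targets = torch.randint(0, 3, (8,))
loss = logit_adjusted_loss(logits, targets, priors)
```

The same shift can instead be applied only at inference time (subtracting the adjustment from the logits before argmax), which trades training-time changes for a simple post-hoc correction.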
Papers
Nineteen papers on this topic, published between February 16, 2023 and November 4, 2024.