Bias Label
Bias in labeled data degrades both the fairness and the accuracy of machine learning models, motivating a growing body of work on mitigating it. Current efforts concentrate on algorithms and model architectures that identify and correct for label bias, often through techniques such as pseudo-label refinement, logit adjustment, and contrastive learning, in some cases without requiring explicit bias annotations. This research is crucial for the reliable and ethical deployment of machine learning systems, particularly in sensitive domains such as healthcare and criminal justice, where biased models can have severe real-world consequences.
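As a concrete illustration of one of the techniques mentioned above, the sketch below applies logit adjustment: class frequencies estimated from the (possibly skewed) training labels are added as log-priors to the model's logits before the softmax, so the loss no longer rewards simply reproducing the label imbalance. This is a minimal sketch, assuming PyTorch; the function name `logit_adjusted_loss`, the temperature `tau`, and the toy counts are illustrative choices, not taken from any specific paper listed below.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, labels, class_counts, tau=1.0):
    """Cross-entropy with logits shifted by the log of estimated class priors.

    Adding tau * log(prior) to each logit penalizes predictions that merely
    mirror the skew in the observed labels (illustrative sketch, not a
    reference implementation of any listed paper).
    """
    priors = class_counts / class_counts.sum()             # empirical label distribution
    adjusted = logits + tau * torch.log(priors + 1e-12)    # shift logits by log-priors
    return F.cross_entropy(adjusted, labels)

# Toy usage: 3 classes with heavily skewed label counts, a batch of 4 examples.
logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
class_counts = torch.tensor([900.0, 80.0, 20.0])
loss = logit_adjusted_loss(logits, labels, class_counts)
```

The same adjustment can be applied per bias group rather than per class when group annotations (or pseudo-labels for them) are available; the papers below explore related ideas without necessarily using this exact formulation.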
Papers
Hollywood Identity Bias Dataset: A Context Oriented Bias Analysis of Movie Dialogues
Sandhya Singh, Prapti Roy, Nihar Sahoo, Niteesh Mallela, Himanshu Gupta, Pushpak Bhattacharyya, Milind Savagaonkar, Nidhi, Roshni Ramnani, Anutosh Maitra, Shubhashis Sengupta
Mitigating Dataset Bias by Using Per-sample Gradient
Sumyeong Ahn, Seongyoon Kim, Se-young Yun