Bias Mitigation Algorithm
Bias mitigation algorithms aim to remove or reduce unfair biases in machine learning models, ensuring more equitable outcomes across demographic groups. Current research focuses on pre-processing and post-processing techniques, including data augmentation strategies such as mixup, dropout methods applied at inference time, and algorithms that reduce discrimination without requiring access to sensitive attribute information; a hedged sketch of the mixup-style approach follows below. These advances are crucial for improving the fairness and trustworthiness of AI systems across applications ranging from healthcare and finance to criminal justice, thereby mitigating the potential harm caused by biased predictions.
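As one illustration of the data-augmentation strategies mentioned above, the sketch below shows a mixup-style pre-processing step that interpolates examples drawn from different demographic groups. The function name `group_mixup`, the pairing scheme, and all parameters are illustrative assumptions rather than a specific method from the papers listed on this page.

```python
# Minimal sketch (assumption): mixup-style augmentation across demographic groups.
# The cross-group pairing below is illustrative, not a published algorithm.
import numpy as np

def group_mixup(X, y, groups, alpha=0.2, rng=None):
    """Interpolate feature/label pairs sampled from two demographic groups.

    X:      (n, d) feature matrix
    y:      (n,) labels, assumed numeric so they can be interpolated
    groups: (n,) binary sensitive-attribute indicator (0 or 1)
    alpha:  Beta-distribution parameter controlling interpolation strength
    """
    rng = np.random.default_rng() if rng is None else rng
    idx_a = np.flatnonzero(groups == 0)
    idx_b = np.flatnonzero(groups == 1)
    n = min(len(idx_a), len(idx_b))
    a = rng.choice(idx_a, size=n, replace=False)
    b = rng.choice(idx_b, size=n, replace=False)

    lam = rng.beta(alpha, alpha, size=(n, 1))             # mixing coefficients
    X_mix = lam * X[a] + (1.0 - lam) * X[b]               # interpolate features
    y_mix = lam[:, 0] * y[a] + (1.0 - lam[:, 0]) * y[b]   # interpolate labels
    return X_mix, y_mix

# Example: augment a toy dataset with cross-group interpolations before training.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.integers(0, 2, size=100).astype(float)
    groups = rng.integers(0, 2, size=100)
    X_aug, y_aug = group_mixup(X, y, groups, rng=rng)
    print(X_aug.shape, y_aug.shape)
```

The design intent, under these assumptions, is that interpolated examples encourage the downstream model's predictions to vary smoothly across groups, which is one way such augmentation is used to reduce group-dependent behavior.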