Bias Metric
Bias metrics in artificial intelligence aim to quantify and surface unfair biases embedded in models and datasets, most often with respect to demographic attributes such as gender and race. Current research emphasizes more robust and nuanced metrics that go beyond simple comparisons of group performance, exploring approaches based on allocational harms, implicit association tests, and region-specific biases, often applied to large language models and vision transformers. This work is crucial for ensuring fairness and mitigating harm in AI systems deployed in high-stakes settings, from hiring to healthcare, by providing concrete tools for evaluating and improving model equity.
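To make the baseline idea of group-performance comparison concrete, here is a minimal sketch, in plain NumPy with made-up predictions and group labels (not drawn from any specific paper), of two standard group-fairness gaps: the demographic parity difference (gap in positive-prediction rates across groups) and the equal opportunity difference (gap in true-positive rates across groups).

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap between the highest and lowest true-positive rate (recall) across groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)  # ground-truth positives in this group
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Hypothetical toy data: 1 = favorable decision (e.g., hired), groups "a" and "b".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, group))         # selection-rate gap
print(equal_opportunity_difference(y_true, y_pred, group))  # recall gap
```

A value of 0 means parity on that criterion; the research surveyed above is largely motivated by the ways such simple gap metrics fall short, e.g., by ignoring allocational harms or biases that only surface in specific regions or languages.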