Bias Metric
Bias metrics in artificial intelligence aim to quantify and identify unfair biases embedded in models and datasets, most often with respect to demographic attributes such as gender and race. Current research emphasizes more robust and nuanced metrics that go beyond simple comparisons of group performance, exploring approaches based on allocational harms, implicit association tests, and region-specific biases, often applied to large language models and vision transformers. This work is crucial for ensuring fairness and mitigating harm in AI systems deployed in high-stakes applications such as hiring and healthcare, by providing tools for evaluating and improving model equity.
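To make the "simple comparison of group performance" baseline concrete, here is a minimal sketch of one of the most common group-fairness metrics, the demographic parity difference: the gap in positive-prediction rates between two demographic groups. The function name and the toy data are illustrative, not taken from any specific paper or dataset discussed here.

```python
def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Difference in positive-prediction rates between two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    A value of 0 means both groups receive positive predictions
    at the same rate; larger magnitudes indicate more disparity.
    """
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        # Guard against an empty group to avoid dividing by zero.
        return sum(members) / max(1, len(members))

    return positive_rate(group_a) - positive_rate(group_b)

# Toy example: group "x" receives positive predictions 75% of the
# time, group "y" only 25%, giving a parity gap of 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(demographic_parity_difference(preds, groups, "x", "y"))  # 0.5
```

Metrics like this are easy to compute but treat all errors as interchangeable, which is precisely the limitation that the allocational-harm and association-based metrics above are designed to address.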