Bias Metric

Bias metrics in artificial intelligence aim to quantify and identify unfair biases embedded within models and datasets, primarily focusing on demographic attributes such as gender and race. Current research emphasizes developing more robust and nuanced metrics that go beyond simple comparisons of group performance, exploring approaches based on allocational harms, implicit association tests, and region-specific biases, often applied to large language models and vision transformers. By providing tools for evaluating and improving model equity, this work is crucial for ensuring fairness and mitigating potential harms in AI systems deployed in high-stakes applications from hiring to healthcare.
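
For concreteness, a minimal sketch of one of the "simple comparisons of group performance" that the more nuanced metrics build on: the demographic parity difference, the gap in positive-prediction rates between two groups. The function name and toy data below are illustrative, assuming binary predictions and a binary sensitive attribute.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array-like of 0/1 model predictions
    group  : array-like of 0/1 sensitive-attribute labels
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(pred = 1 | group 0)
    rate_b = y_pred[group == 1].mean()  # P(pred = 1 | group 1)
    return abs(rate_a - rate_b)

# Toy usage: a perfectly parity-fair model scores 0; larger values mean larger disparity.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A metric like this captures only one narrow notion of fairness, which is why the research described above pursues richer measures such as allocational-harm analyses and implicit association tests.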

Papers