Intersectional Bias

Research on intersectional bias in artificial intelligence focuses on identifying and mitigating unfairness that stems from the interplay of multiple sensitive attributes (e.g., race and gender) within machine learning models, rather than treating each attribute in isolation. Current work applies a range of techniques, including biased-subgroup discovery methods (such as Fairpriori), counterfactual example generation with diffusion models (such as Stable Diffusion), and reweighting schemes that address data imbalances, to analyze and correct these biases across diverse model architectures, including large language models and vision-language models; a minimal sketch of the reweighting idea appears below. This work is crucial for ensuring fairness and equity in AI applications, particularly in high-stakes domains such as hiring, loan approval, and healthcare, where biased outcomes can have significant real-world consequences.
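
To make the reweighting idea concrete, the sketch below assigns each training example a weight inversely proportional to the size of its intersectional subgroup, so that every (race, gender) combination contributes equally to a weighted training loss. This is a minimal illustration of the general technique, not the method of any specific paper cited here; the column names, the toy data, and the final `model.fit(...)` call are assumptions for demonstration.

```python
# Minimal sketch: inverse-frequency reweighting over intersectional subgroups.
# Column names ("race", "gender", "label") are illustrative assumptions.
import pandas as pd


def intersectional_weights(df: pd.DataFrame, attrs=("race", "gender")) -> pd.Series:
    """Return one weight per row, inversely proportional to the size of the
    row's intersectional subgroup, normalized so the weights sum to len(df)."""
    counts = df.groupby(list(attrs)).size()          # size of each subgroup
    n_groups = len(counts)                           # number of subgroups
    keys = list(zip(*(df[a] for a in attrs)))        # (race, gender) key per row
    weights = [len(df) / (n_groups * counts[k]) for k in keys]
    return pd.Series(weights, index=df.index, name="sample_weight")


# Example usage with a toy dataset: the (B, M) subgroup is over-represented,
# so its rows receive smaller weights than the rarer combinations.
data = pd.DataFrame({
    "race":   ["A", "A", "B", "B", "B", "B"],
    "gender": ["F", "M", "F", "M", "M", "M"],
    "label":  [1, 0, 1, 0, 1, 0],
})
data["sample_weight"] = intersectional_weights(data)
print(data)

# The weights can then be passed to any learner that accepts per-sample
# weights, e.g. model.fit(X, y, sample_weight=data["sample_weight"]).
```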

Papers