Racial Bias

Racial bias in artificial intelligence (AI) systems is a significant concern: models trained on biased data can perpetuate and amplify existing societal inequalities across a wide range of applications. Current research investigates bias in diverse AI models, including large language models (LLMs), multimodal foundation models, and computer vision systems, typically using fairness metrics to quantify disparities and debiasing methods to mitigate them. Understanding and addressing racial bias is crucial for ensuring fairness and equity in AI applications, particularly in sensitive domains such as healthcare, criminal justice, and lending, and for advancing the development of more responsible and ethical AI systems.
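
As a concrete illustration of the kind of fairness metric this research relies on, the sketch below computes two common group-fairness measures for a classifier with binary predictions and a binary group attribute: demographic parity difference (the gap in positive-prediction rates between groups) and equal opportunity difference (the gap in true-positive rates). The synthetic data and function names are illustrative assumptions for this sketch, not drawn from any particular paper.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between the two groups.

    0 means the model predicts the positive class at the same rate for
    both groups; larger values indicate greater disparity.
    """
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true: np.ndarray,
                                 y_pred: np.ndarray,
                                 group: np.ndarray) -> float:
    """Absolute gap in true-positive rates (recall) between the two groups."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()  # TPR, group 0
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()  # TPR, group 1
    return abs(tpr_0 - tpr_1)

# Toy example: simulate a model whose positive-prediction rate depends on
# group membership (0.6 vs. 0.4), a simple stand-in for a biased classifier.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < np.where(group == 1, 0.6, 0.4)).astype(int)

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.3f}")
print(f"Equal opportunity difference:  {equal_opportunity_difference(y_true, y_pred, group):.3f}")
```

Debiasing methods then aim to drive such gaps toward zero, for example by reweighing training examples or adjusting decision thresholds per group, typically while monitoring the trade-off with overall accuracy.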

Papers