Racial Bias
Racial bias in artificial intelligence (AI) systems is a significant concern: algorithms trained on biased data can perpetuate and amplify existing societal inequalities across a wide range of applications. Current research investigates bias in diverse AI models, including large language models (LLMs), multimodal foundation models, and computer vision systems, often using fairness metrics to measure bias and debiasing methods to mitigate it. Understanding and addressing racial bias is crucial for ensuring fairness and equity in sensitive domains such as healthcare, criminal justice, and lending, and for advancing the development of more responsible and ethical AI systems.
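The fairness metrics mentioned above are typically group-based statistics. As an illustrative sketch (the function names and toy data here are hypothetical, not taken from any specific paper), two widely used metrics are demographic parity difference (the gap in positive-prediction rates between groups) and equal opportunity difference (the gap in true-positive rates):

```python
# Illustrative sketch of two common group-fairness metrics used in bias audits.
# All names and data are hypothetical examples, not from any specific paper.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    rate_0 = sum(p for p, g in zip(y_pred, group) if g == 0) / group.count(0)
    rate_1 = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates (recall) between groups 0 and 1."""
    def tpr(g):
        # Predictions for members of group g whose true label is positive.
        preds = [p for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Toy audit: group 0 receives positive predictions at a 0.75 rate, group 1 at 0.25.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))           # 0.5
print(equal_opportunity_difference(y_true, y_pred, group))    # 0.5
```

A debiasing method would then aim to drive such gaps toward zero, e.g. by reweighting training data or adjusting decision thresholds per group.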
37 papers