Racial Bias
Racial bias in artificial intelligence (AI) systems is a significant concern: algorithms trained on biased data can perpetuate and amplify existing societal inequalities across many applications. Current research investigates bias in diverse AI models, including large language models (LLMs), multimodal foundation models, and computer vision systems, often using fairness metrics and debiasing methods to assess and mitigate it. Addressing this bias is crucial for ensuring fairness and equity in sensitive domains such as healthcare, criminal justice, and lending, and for developing more responsible and ethical AI systems.
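To make the fairness-metric approach mentioned above concrete, here is a minimal sketch of one widely used metric, the demographic parity difference: the gap in positive-prediction rates between demographic groups. The function names and the loan-approval data below are hypothetical, purely for illustration.

```python
def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rate across groups.

    0.0 means the model's positive rate is identical for every group;
    larger values indicate a bigger disparity between groups.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 1, 1, 0],  # 6/8 = 0.750 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375 approval rate
}

gap = demographic_parity_difference(predictions)
print(f"demographic parity difference: {gap:.3f}")  # 0.750 - 0.375 = 0.375
```

A gap this large would flag the model for closer auditing; debiasing methods (e.g. reweighting training data or adding fairness constraints) then aim to shrink it without unduly degrading accuracy.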