Racial Bias
Racial bias in artificial intelligence (AI) systems is a significant concern: algorithms trained on biased data can perpetuate and amplify existing societal inequalities across a wide range of applications. Current research investigates bias in diverse AI models, including large language models (LLMs), multimodal foundation models, and computer vision systems, often using fairness metrics and debiasing methods to measure and mitigate it. Addressing this bias is crucial for ensuring fairness and equity in sensitive areas such as healthcare, criminal justice, and lending, and for advancing the development of more responsible and ethical AI systems.
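As a concrete illustration of the fairness metrics mentioned above, the sketch below computes demographic parity difference, one common group-fairness measure: the gap in positive-prediction rates between demographic groups. The data, group labels, and function names are illustrative assumptions, not drawn from any specific paper or dataset.

```python
# A minimal sketch of one group-fairness metric: demographic parity
# difference, the gap in positive-prediction rates across groups.
# Predictions and group labels below are hypothetical.

def positive_rate(preds, groups, group):
    """Fraction of positive predictions within one group."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate across all groups.
    0.0 means perfectly equal rates; larger values indicate disparity."""
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical binary predictions (1 = favorable outcome, e.g. loan
# approved) for members of two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A nonzero value does not by itself prove unfairness (base rates may differ legitimately), which is why research pairs such metrics with complementary measures like equalized odds before applying debiasing methods.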