Bias Challenge
Bias in artificial intelligence models, particularly large language models, diffusion models, and vision-language models, is a significant obstacle to their reliable and ethical deployment. Current research focuses on identifying and mitigating these biases through a range of techniques: developing new fairness metrics, analyzing how embedding spaces and knowledge graphs propagate bias, and applying mitigation methods such as model pruning and counterfactual data augmentation to produce more equitable models. Understanding and addressing these biases is crucial for ensuring fairness, preventing discrimination, and building trustworthy AI systems across diverse applications.
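To make one of the mitigation techniques above concrete, here is a minimal sketch of counterfactual data augmentation (CDA) for text data. The idea is to pair each training sentence with a copy in which protected-attribute terms are swapped, so the model sees both variants equally often. The word-pair dictionary and function names below are illustrative assumptions, not taken from any particular paper.

```python
# Illustrative counterfactual data augmentation (CDA) sketch.
# The swap dictionary is a toy example; real systems use much larger,
# carefully curated term lists.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",
    "her": "his",  # ambiguous: "her" can be objective or possessive;
                   # a real pipeline would disambiguate with POS tags
    "man": "woman", "woman": "man",
}

def counterfactual(sentence: str) -> str:
    """Swap each gendered term for its counterpart, preserving
    capitalization and trailing punctuation."""
    out = []
    for token in sentence.split():
        core = token.rstrip(".,!?;:")      # strip punctuation for lookup
        tail = token[len(core):]
        swap = GENDER_PAIRS.get(core.lower())
        if swap is None:
            out.append(token)              # no gendered term: keep as-is
        else:
            if core and core[0].isupper():
                swap = swap.capitalize()   # "He" -> "She"
            out.append(swap + tail)
    return " ".join(out)

def augment(corpus: list[str]) -> list[str]:
    """Return the corpus plus one counterfactual copy per sentence."""
    return corpus + [counterfactual(s) for s in corpus]
```

For example, `augment(["He is a doctor."])` yields both `"He is a doctor."` and `"She is a doctor."`, balancing the gendered contexts the model trains on. String substitution like this is only a first approximation; production CDA must handle coreference and grammatical agreement.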