Multilingual Bias
Multilingual bias in artificial intelligence models refers to the systematic errors and unfairness these systems exhibit across different languages, often reflecting and amplifying biases present in their training data. Current research focuses on identifying and quantifying these biases in various model architectures, including large language models and vision-language models, using newly developed multilingual datasets and evaluation metrics. Understanding and mitigating these biases is crucial for ensuring fairness and equity in AI applications, with impact on fields ranging from machine translation and speech recognition to social media analysis.
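One common way such biases are quantified (a minimal sketch, not the method of any specific paper listed below) is to evaluate the same task in several languages and report the disparity in performance; the function names and toy data here are hypothetical placeholders.

```python
# Illustrative sketch: measuring multilingual bias as the accuracy
# disparity of a model across languages. All data below is a toy example.

def accuracy(preds, labels):
    """Fraction of predictions matching the gold labels."""
    correct = sum(p == l for p, l in zip(preds, labels))
    return correct / len(labels)

def language_bias_gap(per_language_results):
    """Per-language accuracy plus the max-min gap (0.0 means parity)."""
    scores = {lang: accuracy(preds, labels)
              for lang, (preds, labels) in per_language_results.items()}
    gap = max(scores.values()) - min(scores.values())
    return scores, gap

# Hypothetical (predictions, gold labels) per language.
results = {
    "en": ([1, 0, 1, 1], [1, 0, 1, 1]),  # 4/4 correct
    "sw": ([1, 0, 0, 1], [1, 0, 1, 1]),  # 3/4 correct
    "hi": ([0, 0, 1, 1], [1, 0, 1, 1]),  # 3/4 correct
}

scores, gap = language_bias_gap(results)
print(scores, gap)  # gap of 0.25 between best- and worst-served language
```

In practice, published benchmarks replace the toy accuracy function with task-specific metrics (e.g. translation quality or word error rate), but the same parity-gap framing applies.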
Papers
August 20, 2024
July 2, 2024
June 29, 2024
March 26, 2024
December 23, 2023
May 22, 2023
May 1, 2022
April 7, 2022