Multilingual Bias

Multilingual bias in artificial intelligence models refers to the systematic errors and unfairness these systems exhibit across different languages, often reflecting and amplifying biases present in their training data. Current research focuses on identifying and quantifying these biases in various model architectures, including large language models and vision-language models, using newly developed multilingual datasets and evaluation metrics. Understanding and mitigating these biases is crucial for ensuring fairness and equity in AI applications, from machine translation and speech recognition to social media analysis.
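One simple way such bias is quantified is as the disparity in a model's per-language performance. The sketch below is a minimal, hypothetical illustration of that idea: the function names, the gap/spread summary, and all accuracy figures are assumptions for demonstration, not measurements from any real model or benchmark.

```python
# Hypothetical sketch: quantifying cross-lingual performance disparity
# as the gap and spread of per-language accuracy scores.

def accuracy(preds, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == l for p, l in zip(preds, labels))
    return correct / len(labels)

def disparity_metrics(per_language_scores):
    """Summarize bias via the mean, max-min gap, and standard
    deviation of per-language scores."""
    scores = list(per_language_scores.values())
    mean = sum(scores) / len(scores)
    gap = max(scores) - min(scores)  # disparity between best and worst language
    variance = sum((s - mean) ** 2 for s in scores) / len(scores)
    return {"mean": mean, "gap": gap, "std": variance ** 0.5}

# Illustrative (invented) per-language accuracies for some multilingual model
scores = {"en": 0.91, "de": 0.88, "sw": 0.71, "hi": 0.76}
print(disparity_metrics(scores))
```

A large gap between high- and low-resource languages (here, "en" vs. "sw") is one signal of the kind of inequity the literature aims to measure and mitigate; real evaluations use dedicated multilingual benchmarks rather than invented numbers.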

Papers