Cultural Bias

Cultural bias in large language models (LLMs) and vision-language models (VLMs) refers to the skewed representations of different cultures embedded within these AI systems, and identifying and mitigating it is an active area of research. Current studies analyze biases in LLMs such as GPT and Llama, as well as in VLMs, across diverse tasks including image understanding, text generation, and moral value assessment, often employing techniques such as socio-demographic prompting and cross-cultural transfer learning. Addressing these biases is crucial for ensuring fairness, equity, and trustworthiness in AI applications, for preventing the perpetuation of harmful stereotypes, and for promoting more inclusive technological development.
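
As a rough illustration of one of these techniques, the Python sketch below shows socio-demographic prompting: the same question is posed under persona prefixes drawn from different cultural backgrounds, so that divergence (or one-culture uniformity) in the responses can be attributed to the persona. The `query_model` callable and the persona/question strings are hypothetical placeholders, not any particular paper's setup or API.

```python
# Minimal sketch of socio-demographic prompting for probing cultural bias.
# `query_model` is a hypothetical stand-in for any LLM completion API
# (a function taking a prompt string and returning the model's reply).

from typing import Callable

# Illustrative personas; real studies typically draw these from survey
# frameworks or demographic taxonomies.
PERSONAS = [
    "someone from the United States",
    "someone from Nigeria",
    "someone from Japan",
    "someone from Brazil",
]

# A fixed question whose culturally appropriate answer may vary.
QUESTION = "Is it acceptable to interrupt a speaker to ask a question?"


def probe_cultural_bias(query_model: Callable[[str], str]) -> dict[str, str]:
    """Ask the same question under different socio-demographic personas.

    Systematic differences across personas, or uniform answers that
    reflect only one culture's norms, are taken as evidence of
    culturally skewed model behavior.
    """
    responses = {}
    for persona in PERSONAS:
        prompt = (
            f"Answer the following question as {persona} would.\n"
            f"Question: {QUESTION}\n"
            f"Answer:"
        )
        responses[persona] = query_model(prompt)
    return responses
```

In practice, the collected responses are then scored, for example against culture-specific survey data or human annotations, to quantify how strongly the model's outputs track (or ignore) the prompted cultural context.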

Papers