Cultural Bias
Cultural bias in large language models (LLMs) and vision-language models (VLMs) is an active area of research focused on identifying and mitigating the skewed cultural representations embedded in these AI systems. Current studies examine models such as GPT, Llama, and various VLMs, analyzing bias across tasks such as image understanding, text generation, and moral value assessment, often using techniques like socio-demographic prompting and cross-cultural transfer learning. Understanding and addressing these biases is crucial for fairness, equity, and trustworthiness in AI applications: it helps prevent the perpetuation of harmful stereotypes and promotes more inclusive technological development.
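To make "socio-demographic prompting" concrete, the minimal sketch below (not drawn from either paper; the persona list and the `generate` callable are illustrative assumptions) conditions the same question on several cultural identities and collects the responses, which can then be compared for divergence.

```python
# Minimal sketch of socio-demographic prompting, assuming a generic
# generate(prompt: str) -> str model call (hypothetical; substitute your
# own LLM client or pipeline).

from typing import Callable, Dict, List

PERSONAS: List[str] = [
    "a person from the United States",
    "a person from India",
    "a person from Nigeria",
    "a person from Japan",
]


def socio_demographic_prompts(question: str, personas: List[str]) -> Dict[str, str]:
    """Build one prompt per persona by conditioning the model on a cultural identity."""
    return {
        persona: f"Answer as {persona} would.\n\nQuestion: {question}\nAnswer:"
        for persona in personas
    }


def probe_cultural_bias(question: str, generate: Callable[[str], str]) -> Dict[str, str]:
    """Collect one response per persona.

    Divergent answers suggest the model's behavior is sensitive to the cultural
    conditioning; near-identical answers suggest the prompt has little effect.
    """
    prompts = socio_demographic_prompts(question, PERSONAS)
    return {persona: generate(prompt) for persona, prompt in prompts.items()}


if __name__ == "__main__":
    # Stub generator so the sketch runs without any model dependency.
    echo = lambda prompt: f"[model output for: {prompt[:40]}...]"
    results = probe_cultural_bias(
        "Is it acceptable to eat with your hands at a formal dinner?", echo
    )
    for persona, answer in results.items():
        print(f"{persona}: {answer}")
```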
Papers
See It from My Perspective: Diagnosing the Western Cultural Bias of Large Vision-Language Models in Image Understanding
Amith Ananthram, Elias Stengel-Eskin, Carl Vondrick, Mohit Bansal, Kathleen McKeown
Cultural Conditioning or Placebo? On the Effectiveness of Socio-Demographic Prompting
Sagnik Mukherjee, Muhammad Farid Adilazuarda, Sunayana Sitaram, Kalika Bali, Alham Fikri Aji, Monojit Choudhury