Cultural Value
Research on cultural values in large language models (LLMs) examines how these models encode and express cultural biases, with the goal of improving their alignment with diverse cultural norms. Current work probes a range of LLMs with prompts designed to elicit value-based judgments across different languages and cultures, often scoring responses against frameworks such as Hofstede's cultural dimensions. This research is crucial for mitigating bias in AI systems and for ensuring that diverse populations receive equitable access to, and fair treatment from, AI, informing both the development of more responsible models and their ethical deployment in real-world applications.
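The prompt-based probing described above can be sketched minimally: present the model with survey-style statements tied to cultural dimensions, ask for a Likert rating, and average the ratings per dimension. Everything here is an illustrative assumption — the item texts, the `query_model` stub standing in for a real LLM call, and the simple averaging scheme are not taken from any specific paper.

```python
from statistics import mean

# Survey-style items keyed to Hofstede-like dimensions (hypothetical phrasings).
ITEMS = {
    "power_distance": [
        "Subordinates should not question decisions made by their superiors.",
    ],
    "individualism": [
        "Personal achievement matters more than group harmony.",
    ],
}

PROMPT_TEMPLATE = (
    "On a scale from 1 (strongly disagree) to 5 (strongly agree), "
    'rate the statement: "{statement}". Answer with a single number.'
)

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a fixed rating here."""
    return "3"

def parse_rating(text: str) -> int:
    """Extract the first Likert rating (1-5) from a model response."""
    for ch in text:
        if ch in "12345":
            return int(ch)
    raise ValueError(f"no Likert rating found in: {text!r}")

def dimension_scores(items=ITEMS) -> dict:
    """Average the model's Likert ratings per cultural dimension."""
    scores = {}
    for dim, statements in items.items():
        ratings = [
            parse_rating(query_model(PROMPT_TEMPLATE.format(statement=s)))
            for s in statements
        ]
        scores[dim] = mean(ratings)
    return scores

print(dimension_scores())
```

In practice the same items would be translated into multiple languages and posed to several models, so that per-dimension scores can be compared across languages and against human survey baselines.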