Cross-Cultural Understanding Benchmark
Cross-cultural understanding benchmarks evaluate the ability of artificial intelligence models, particularly vision-language and text-to-image models, to accurately and respectfully represent diverse cultures. Current research focuses on developing benchmarks that assess not only image realism and aesthetic quality but also cultural awareness and diversity, often employing techniques like self-contrastive fine-tuning and incorporating structured knowledge bases and large language models. These efforts aim to mitigate biases and stereotypes in AI-generated content, leading to more equitable and inclusive applications across various domains, including assistive technologies and media representation.
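As a rough illustration of the self-contrastive fine-tuning idea mentioned above, the sketch below implements a generic InfoNCE-style objective: an anchor embedding is pulled toward a culturally faithful "positive" and pushed away from the model's own stereotyped generations as "negatives". This is a minimal, hypothetical sketch, not the method of any specific paper; the function name, embedding dimensions, and data are all illustrative.

```python
import numpy as np

def self_contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: pull the anchor toward a culturally faithful
    positive embedding and push it away from the model's own stereotyped
    generations (negatives). All inputs are unit-normalised vectors."""
    pos_sim = np.dot(anchor, positive) / temperature
    neg_sims = negatives @ anchor / temperature
    logits = np.concatenate(([pos_sim], neg_sims))
    # cross-entropy with the positive at index 0: -log softmax(logits)[0]
    return float(-pos_sim + np.log(np.sum(np.exp(logits))))

# Toy example with random unit vectors standing in for model embeddings.
rng = np.random.default_rng(0)
def unit(v):
    return v / np.linalg.norm(v)

anchor = unit(rng.normal(size=64))
positive = unit(anchor + 0.1 * rng.normal(size=64))   # near the anchor
negatives = np.stack([unit(rng.normal(size=64)) for _ in range(8)])
loss = self_contrastive_loss(anchor, positive, negatives)
```

In a real fine-tuning setup the same objective would be applied to learned embeddings and backpropagated through the model; here the point is only the shape of the contrast between a model's preferred and disfavoured outputs.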