Cultural Background
Cultural background significantly influences how humans perceive and interact with the world, impacting everything from language and visual perception to online behavior and mental health. Current research focuses on understanding and mitigating cultural biases in artificial intelligence models, particularly large language models (LLMs) and vision-language models, using techniques like instruction tuning and fine-tuning with culturally relevant datasets. This work is crucial for developing more inclusive and equitable AI systems and for advancing our understanding of cross-cultural communication and cognition in both human and artificial systems.
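As a rough illustration of the fine-tuning approach mentioned above, the sketch below shows a minimal supervised fine-tuning loop on a culturally grounded instruction dataset using the Hugging Face Transformers and Datasets libraries. The model name, the cultural_instructions.jsonl file, the prompt format, and all hyperparameters are hypothetical placeholders for illustration only and are not taken from any of the papers listed here.

```python
# Minimal sketch: fine-tuning a causal LLM on culturally relevant
# instruction/response pairs. All names and settings are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for any causal LLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical JSONL file with "instruction" and "response" fields,
# e.g. questions about local customs paired with culturally apt answers.
raw = load_dataset("json", data_files="cultural_instructions.jsonl")["train"]

def to_text(example):
    # Concatenate instruction and response into a single training string.
    return {"text": (f"### Instruction:\n{example['instruction']}\n"
                     f"### Response:\n{example['response']}")}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = (raw.map(to_text)
                .map(tokenize, batched=True,
                     remove_columns=["instruction", "response", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="culture-tuned-llm",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same pattern extends to instruction tuning at larger scale; the key design choice is the dataset itself, which encodes the cultural knowledge the model is meant to absorb.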
Papers
Self-Pluralising Culture Alignment for Large Language Models
Shaoyang Xu, Yongqi Leng, Linhao Yu, Deyi Xiong
WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines
Genta Indra Winata, Frederikus Hudi, Patrick Amadeus Irawan, David Anugraha, Rifki Afina Putri, Yutong Wang, Adam Nohejl, Ubaidillah Ariq Prathama, Nedjma Ousidhoum, Afifa Amriani, Anar Rzayev, Anirban Das, Ashmari Pramodya, Aulia Adila, Bryan Wilie, Candy Olivia Mawalim, Ching Lam Cheng, Daud Abolade, Emmanuele Chersoni, Enrico Santus, Fariz Ikhwantri, Garry Kuwanto, Hanyang Zhao, Haryo Akbarianto Wibowo, Holy Lovenia, Jan Christian Blaise Cruz, Jan Wira Gotama Putra, Junho Myung, Lucky Susanto, Maria Angelica Riera Machin, Marina Zhukova, Michael Anugraha, Muhammad Farid Adilazuarda, Natasha Santosa, Peerat Limkonchotiwat, Raj Dabre, Rio Alexander Audino, Samuel Cahyawijaya, Shi-Xiong Zhang, Stephanie Yulia Salim, Yi Zhou, Yinxuan Gui, David Ifeoluwa Adelani, En-Shiun Annie Lee, Shogo Okada, Ayu Purwarianti, Alham Fikri Aji, Taro Watanabe, Derry Tanti Wijaya, Alice Oh, Chong-Wah Ngo et al. (6 additional authors not shown)