Paper ID: 2401.03030
Exploring Gender Biases in Language Patterns of Human-Conversational Agent Conversations
Weizi Liu
With the rise of human-machine communication, machines are increasingly designed with humanlike characteristics, such as gender, which can inadvertently trigger cognitive biases. Many conversational agents (CAs), such as voice assistants and chatbots, default to female personas, raising concerns that these designs objectify women and perpetuate gender stereotypes and inequality. Situated in conversational AI design, this research aims to examine more deeply the impacts of gender biases in human-CA interactions. From a behavioral and communication research standpoint, this program focuses not only on users' perceptions but also on their linguistic styles when interacting with CAs, an aspect previous research has rarely explored. It seeks to understand how CAs' gender designs might trigger pre-existing gender biases, and further investigates how such designs may reinforce those biases and extend them to human-human communication. The findings aim to inform the ethical design of conversational agents, addressing whether gender assignment in CAs is appropriate and how design can promote gender equality.
Submitted: Jan 5, 2024