Paper ID: 2202.10848
Speciesist bias in AI -- How AI applications perpetuate discrimination and unfair outcomes against animals
Thilo Hagendorff, Leonie Bossert, Tse Yip Fai, Peter Singer
Massive efforts are being made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases in which biased algorithmic decision-making has harmed women, people of color, members of minority groups, and others. However, the AI fairness field still has a blind spot: it is insensitive to discrimination against animals. This paper is the first to describe this 'speciesist bias' and to investigate it in several different AI systems. AI applications learn and solidify speciesist biases when they are trained on datasets in which speciesist patterns prevail. Such patterns can be found in image recognition systems, large language models, and recommender systems. AI technologies therefore currently play a significant role in perpetuating and normalizing violence against animals. This can only change if AI fairness frameworks widen their scope and include mitigation measures for speciesist biases. This paper addresses the AI community in this regard and stresses the influence AI systems can have on either increasing or reducing the violence inflicted on animals, especially farmed animals.
Submitted: Feb 22, 2022