Representational Harm

Representational harm in AI refers to the ways in which artificial intelligence systems, particularly large language models (LLMs) and text-to-image generators, perpetuate or amplify existing societal biases, producing unfair or inaccurate depictions of certain groups. Current research focuses on identifying and measuring these harms across modalities (text, image, audio), often examining biases related to gender, ethnicity, and socioeconomic status in specific modeling paradigms such as contrastively trained encoders and LLMs. Understanding and mitigating representational harm is crucial for fairness, equity, and the responsible development and deployment of AI systems across diverse applications, shaping both the scientific understanding of AI bias and the ethics of deploying these models in practice.
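
As an illustration of the measurement work described above, one widely used technique for quantifying representational bias in embedding-based models is the Word Embedding Association Test (WEAT; Caliskan et al., 2017). Below is a minimal sketch of the WEAT effect size, assuming access to embedding vectors from the model under audit; the random vectors here are hypothetical placeholders standing in for real group and attribute word embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size d: standardized difference in mean association
    between target sets X and Y with respect to attribute sets A and B."""
    assoc_X = [association(x, A, B) for x in X]
    assoc_Y = [association(y, A, B) for y in Y]
    pooled_std = np.std(assoc_X + assoc_Y)  # std over all target words
    return (np.mean(assoc_X) - np.mean(assoc_Y)) / pooled_std

# Toy example: random vectors as placeholders for real embeddings
# (replace with embeddings extracted from the model being audited).
rng = np.random.default_rng(0)
dim = 50
X = [rng.normal(size=dim) for _ in range(8)]  # e.g., terms for group 1
Y = [rng.normal(size=dim) for _ in range(8)]  # e.g., terms for group 2
A = [rng.normal(size=dim) for _ in range(8)]  # e.g., "pleasant" attribute words
B = [rng.normal(size=dim) for _ in range(8)]  # e.g., "unpleasant" attribute words

print(f"WEAT effect size d = {weat_effect_size(X, Y, A, B):.3f}")
```

An effect size near zero suggests the two target groups are associated with the attribute sets similarly, while a large positive or negative d indicates a differential association, one signal of representational bias in the learned embedding space.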

Papers