Paper ID: 2411.05040

Bottom-Up and Top-Down Analysis of Values, Agendas, and Observations in Corpora and LLMs

Scott E. Friedman, Noam Benkler, Drisana Mosaphir, Jeffrey Rye, Sonja M. Schmer-Galunder, Micah Goldwater, Matthew McLure, Ruta Wheelock, Jeremy Gottlieb, Robert P. Goldman, Christopher Miller

Large language models (LLMs) generate diverse, situated, persuasive texts from a plurality of potential perspectives, influenced heavily by their prompts and training data. As part of LLM adoption, we seek to characterize, and ideally manage, the socio-cultural values that they express, for reasons of safety, accuracy, inclusion, and cultural fidelity. We present a validated approach that automatically (1) extracts heterogeneous latent value propositions from texts, (2) assesses the resonance and conflict of values with texts, and (3) combines these operations to characterize the pluralistic value alignment of human-sourced and LLM-sourced textual data.
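The abstract describes three operations; the minimal Python sketch below illustrates what the first two might look like in practice. The `call_llm` stub, prompt wording, and JSON response format are assumptions for illustration only, not the authors' pipeline.

```python
# Hypothetical sketch of two operations from the abstract: extracting latent
# value propositions from a text and scoring how a value resonates or conflicts
# with a text. The call_llm stub, prompts, and JSON schema are illustrative
# assumptions, not the paper's actual implementation.
import json
from typing import Callable, List

def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion backend; returns canned JSON here."""
    return json.dumps({"values": ["transparency in institutions"],
                       "resonance": 0.2})

def extract_value_propositions(text: str, llm: Callable[[str], str]) -> List[str]:
    """Operation (1): elicit latent value propositions expressed by a text."""
    prompt = (
        "List the socio-cultural values implicitly endorsed by the passage "
        "below as short propositions. Respond as JSON: {\"values\": [...]}\n\n"
        f"Passage: {text}"
    )
    return json.loads(llm(prompt)).get("values", [])

def score_value_alignment(value: str, text: str, llm: Callable[[str], str]) -> float:
    """Operation (2): rate resonance (+1) vs. conflict (-1) of a value with a text."""
    prompt = (
        f"On a scale from -1 (conflicts) to 1 (resonates), how does the passage "
        f"relate to the value \"{value}\"? Respond as JSON: {{\"resonance\": x}}\n\n"
        f"Passage: {text}"
    )
    return float(json.loads(llm(prompt)).get("resonance", 0.0))

if __name__ == "__main__":
    passage = "Officials should publish their budgets for public review."
    values = extract_value_propositions(passage, call_llm)
    scores = {v: score_value_alignment(v, passage, call_llm) for v in values}
    print(scores)  # e.g. {'transparency in institutions': 0.2}
```

Operation (3) would then aggregate such per-value scores across a corpus or across LLM outputs to profile their overall value alignment.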

Submitted: Nov 6, 2024