Political Bias
Political bias in large language models (LLMs) is a growing research area focused on identifying and mitigating the ideological leanings embedded in these systems. Current research uses a range of methods, including analyzing LLM responses to political prompts, comparing outputs across languages and datasets, and applying parameter-efficient fine-tuning to align models with diverse viewpoints. Understanding and addressing this bias matters because LLMs are increasingly used in information dissemination and decision-making, where they can influence public opinion and societal outcomes.
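One common probing setup presents a model with politically coded statements and scores its stated agreement. The sketch below illustrates that pattern; it is a minimal example, not a method taken from the papers listed here. The `query_model` stub, the two example statements, and the left/right coding are illustrative assumptions standing in for a real LLM call and a validated survey instrument.

```python
# Minimal sketch of a prompt-based political-leaning probe.
# Assumptions (not from the source): `query_model` is a placeholder for any
# chat/completion API; statements and their left/right coding are illustrative.

from statistics import mean

# Likert-style statements; the "left"/"right" coding here is purely illustrative.
STATEMENTS = [
    ("The government should raise the minimum wage.", "left"),
    ("Private markets allocate resources better than regulators.", "right"),
]

AGREEMENT_SCORES = {
    "strongly disagree": -2, "disagree": -1, "neutral": 0,
    "agree": 1, "strongly agree": 2,
}

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call (API client or local model).
    Returns a fixed answer so the sketch runs without network access."""
    return "neutral"

def probe_leaning() -> float:
    """Average signed agreement across statements: positive values lean toward
    'left'-coded items under this toy coding, negative toward 'right'-coded ones."""
    scores = []
    for statement, coding in STATEMENTS:
        prompt = (
            "Respond with exactly one of: strongly disagree, disagree, "
            f"neutral, agree, strongly agree.\nStatement: {statement}"
        )
        answer = query_model(prompt).strip().lower()
        score = AGREEMENT_SCORES.get(answer, 0)
        # Flip the sign for right-coded items so a single axis captures both.
        scores.append(score if coding == "left" else -score)
    return mean(scores)

if __name__ == "__main__":
    print(f"Mean leaning score: {probe_leaning():+.2f}")
```

In practice, the stub would be replaced by calls to the model under study, the statement set would come from an established survey or political-compass instrument, and responses would be aggregated over many paraphrases and sampling seeds to reduce prompt sensitivity.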
Papers
Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies
Zihao He, Siyi Guo, Ashwin Rao, Kristina Lerman
The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents
Yun-Shiuan Chuang, Siddharth Suresh, Nikunj Harlalka, Agam Goyal, Robert Hawkins, Sijia Yang, Dhavan Shah, Junjie Hu, Timothy T. Rogers