Political Bias
Political bias in large language models (LLMs) is a burgeoning research area focused on identifying and mitigating the ideological leanings embedded within these systems. Current research utilizes various methods, including analyzing LLM responses to political prompts, comparing outputs across different languages and datasets, and employing techniques like parameter-efficient fine-tuning to align models with diverse viewpoints. Understanding and addressing this bias is crucial because LLMs are increasingly used in information dissemination and decision-making processes, potentially influencing public opinion and societal outcomes.
Papers
Identifying the sources of ideological bias in GPT models through linguistic variation in output
Christina Walker, Joan C. Timoneda
On the Relationship between Truth and Political Bias in Language Models
Suyash Fulay, William Brannon, Shrestha Mohanty, Cassandra Overney, Elinor Poole-Dayan, Deb Roy, Jad Kabbara
GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy
Jan Batzner, Volker Stocker, Stefan Schmid, Gjergji Kasneci
Examining the Influence of Political Bias on Large Language Model Performance in Stance Classification
Lynnette Hui Xian Ng, Iain Cruickshank, Roy Ka-Wei Lee