Prompt Bias
Prompt bias is the tendency of prompts used to interact with large language models (LLMs) and other AI systems to introduce or amplify biases in a model's output, and it is a significant area of current research. Studies focus on identifying and mitigating this bias across model architectures, from text-to-image generators to models used for factual knowledge extraction, using techniques such as prompt modification, representation-based debiasing, and causal intervention. Addressing prompt bias is crucial for the fairness, reliability, and ethical use of AI systems across diverse applications, affecting both the robustness of future models and the trustworthiness of AI-generated content.
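One common mitigation idea is contextual calibration: estimate the label prior a prompt template induces (e.g., by feeding the model a content-free input such as "N/A") and divide it out of the model's predictions. The sketch below illustrates this in minimal form; the probability dictionaries stand in for a real LLM's label probabilities, and all names are illustrative assumptions rather than any specific paper's implementation.

```python
# Hedged sketch of contextual calibration against prompt bias.
# `prior` stands in for P(label | content-free input) under a given
# prompt template; `raw` for P(label | actual input). Both are
# placeholders for probabilities a real LLM would produce.

def calibrate(label_probs, prior_probs):
    """Divide out the prompt-induced label prior and renormalize."""
    adjusted = {
        label: p / max(prior_probs[label], 1e-9)
        for label, p in label_probs.items()
    }
    total = sum(adjusted.values())
    return {label: v / total for label, v in adjusted.items()}

# Suppose a biased template makes the model favor "positive" even for
# a content-free input:
prior = {"positive": 0.7, "negative": 0.3}
raw = {"positive": 0.6, "negative": 0.4}

calibrated = calibrate(raw, prior)
# 0.6/0.7 < 0.4/0.3, so after renormalizing, "negative" now wins:
# the template's spurious preference for "positive" has been removed.
```

The same idea extends to factual knowledge extraction, where a fill-in-the-blank template may prefer certain entities regardless of the subject being queried.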