Behavioral Bias
Behavioral biases in large language models (LLMs) and large vision-language models (LVLMs) are a growing area of research focused on how these models inherit, and can amplify, human biases present in their training data. Current studies apply game-theoretic and utility-theoretic frameworks to quantify and compare the economic and strategic decision-making biases of different models, spanning open-source architectures and proprietary ones such as GPT-4. This work matters for mitigating the harmful societal impacts of biased AI systems and for ensuring responsible deployment in applications ranging from finance to content generation, especially given observed inconsistencies in bias across languages and model sizes. Developing bias detection and mitigation techniques, such as Fair Diffusion, remains a key focus for improving fairness and reliability.
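As a concrete illustration of the utility-theory framing, the sketch below probes one canonical bias, loss aversion, by offering a model a ladder of 50/50 mixed gambles and finding the gain/loss ratio at which it starts accepting. This is a minimal sketch, not a protocol from any of the cited studies: `query_model`, the dollar amounts, and the threshold-based lambda estimate are all illustrative assumptions.

```python
"""Minimal sketch of a utility-theory bias probe for an LLM.

A risk-neutral expected-utility maximizer accepts a 50/50 gamble whenever
the gain exceeds the loss; a loss-averse agent rejects until the gain is
roughly lambda times the loss (lambda ~ 2 in classic human studies).
"""


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion call.

    Replace with your provider's actual API; no specific client
    library is assumed here.
    """
    raise NotImplementedError


# Fixed $10 loss, gains from $10 to $30: gain/loss ratios of 1.0 to 3.0.
GAMBLES = [(gain, 10) for gain in range(10, 31, 2)]


def accepts(gain: int, loss: int) -> bool:
    """Ask the model to accept or reject one mixed gamble."""
    prompt = (
        f"You are offered a coin-flip gamble: heads you win ${gain}, "
        f"tails you lose ${loss}. Answer only 'accept' or 'reject'."
    )
    reply = query_model(prompt).strip().lower()
    return reply.startswith("accept")


def estimate_loss_aversion() -> float:
    """Return the smallest gain/loss ratio the model accepts.

    This threshold is a crude estimate of the loss-aversion
    coefficient lambda; infinity means every gamble was rejected.
    """
    for gain, loss in GAMBLES:
        if accepts(gain, loss):
            return gain / loss
    return float("inf")
```

Comparing the estimated threshold against the human benchmark of roughly 2 gives a simple, reproducible bias measure, and repeating the sweep across prompt languages or model sizes surfaces the kinds of inconsistencies noted above.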