Language Bias
Language bias in artificial intelligence, particularly in large language models (LLMs), is the systematic tendency of these systems to reflect and amplify societal biases present in their training data. Current research focuses on identifying and mitigating such biases across domains including sentiment analysis, hate speech detection, and visual question answering, often using techniques such as adversarial training, bias probing, and prompt engineering with transformer-based models like BERT and RoBERTa. Addressing language bias is crucial for fairness, equity, and trustworthiness in AI systems: it shapes both the scientific community's understanding of AI limitations and the ethical development and deployment of AI technologies in real-world applications.
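To make the bias-probing technique mentioned above concrete, the sketch below compares the scores a masked language model assigns to the same candidate words under contrasting gendered prompt templates, using the Hugging Face fill-mask pipeline. It is a minimal illustration, not a method from any of the listed papers; the model name, templates, and candidate words are assumptions chosen for clarity.

```python
# Minimal bias-probing sketch: score the same candidate occupations under
# contrasting gendered templates and compare the resulting distributions.
# Model, templates, and candidates are illustrative choices, not a benchmark.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

templates = ["He worked as a [MASK].", "She worked as a [MASK]."]
candidates = ["nurse", "engineer", "doctor", "teacher"]

for template in templates:
    # targets= restricts scoring to the listed candidate tokens
    results = fill_mask(template, targets=candidates)
    scores = {r["token_str"]: round(r["score"], 4) for r in results}
    print(template, scores)
```

A large score gap between the two templates for the same occupation is one simple signal of gender-occupation association in the model; mitigation approaches such as adversarial training aim to shrink such gaps.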
Papers
Performance in a dialectal profiling task of LLMs for varieties of Brazilian Portuguese
Raquel Meister Ko Freitag, Túlio Sousa de Gois
Eliminating the Language Bias for Visual Question Answering with Fine-grained Causal Intervention
Ying Liu, Ge Bai, Chenji Lu, Shilong Li, Zhang Zhang, Ruifang Liu, Wenbin Guo