Multilingual Safety
Multilingual safety in large language models (LLMs) focuses on ensuring these models generate safe and unbiased outputs across diverse languages, addressing concerns about harmful content generation and biases amplified by language differences. Current research emphasizes developing comprehensive multilingual safety benchmarks and evaluation toolkits to assess LLMs' performance across various languages and safety categories, including toxicity, misinformation, and jailbreaking vulnerabilities. This research is crucial for responsible AI deployment globally, mitigating risks associated with unequal access to safe AI technology and promoting equitable access to AI benefits worldwide.
Papers
September 24, 2024
September 6, 2024
August 7, 2024
June 24, 2024
April 22, 2024
October 3, 2023