Multilingual Capability
Research on multilingual capability in large language models (LLMs) aims to build models that perform well across many languages, countering the dominance of English-centric systems. Active work explores techniques such as multilingual instruction tuning, continual pre-training, and manipulation of internal language representations to improve performance, particularly for low-resource languages, while mitigating issues such as catastrophic forgetting and bias. This field is crucial for broadening AI accessibility globally and fostering equitable access to advanced AI services, shaping both the scientific understanding of how models represent language and the development of inclusive real-world applications.
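To make the instruction-tuning idea concrete, the sketch below fine-tunes a small multilingual causal LM on instruction-response pairs mixed across languages, using Hugging Face Transformers. It is a minimal illustration under stated assumptions: the model name, prompt template, output path, and toy examples are placeholders chosen for demonstration, not details drawn from any specific paper.

```python
# A minimal sketch of multilingual instruction tuning (illustrative only):
# fine-tune a small multilingual causal LM on instruction-response pairs
# mixed across languages. Model name, prompt template, and the toy data
# below are assumptions for demonstration.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "bigscience/bloom-560m"  # assumption: any multilingual causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:  # some tokenizers ship without a pad token
    tokenizer.pad_token = tokenizer.eos_token

# Toy instruction-response pairs spanning several languages (hypothetical).
examples = [
    {"instruction": "Translate to French: Good morning.", "response": "Bonjour."},
    {"instruction": "Nenne drei deutsche Flüsse.", "response": "Rhein, Elbe, Donau."},
    {"instruction": "Jibu kwa Kiswahili: Jua ni nini?",
     "response": "Jua ni nyota iliyo karibu zaidi na Dunia."},
]

class InstructionDataset(torch.utils.data.Dataset):
    """Formats each pair with a simple prompt template and tokenizes it."""

    def __init__(self, pairs, tokenizer, max_len=128):
        self.items = []
        for p in pairs:
            text = f"### Instruction:\n{p['instruction']}\n### Response:\n{p['response']}"
            enc = tokenizer(text, truncation=True, max_length=max_len,
                            padding="max_length", return_tensors="pt")
            item = {k: v.squeeze(0) for k, v in enc.items()}
            labels = item["input_ids"].clone()
            labels[item["attention_mask"] == 0] = -100  # ignore pad positions in the loss
            item["labels"] = labels
            self.items.append(item)

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        return self.items[i]

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="multilingual-instruct",  # hypothetical output path
        per_device_train_batch_size=2,
        num_train_epochs=1,
        report_to="none",
    ),
    train_dataset=InstructionDataset(examples, tokenizer),
)
trainer.train()  # standard causal-LM fine-tuning over the mixed-language data
```

In practice, tuning sets of this kind typically mix high- and low-resource languages in one pool, and replaying a portion of the original pre-training data alongside the new instructions is one common way to mitigate the catastrophic forgetting mentioned above.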