Multilingual Large Language Model

Multilingual large language models (MLLMs) extend the capabilities of large language models to many languages, improving cross-lingual understanding and generation. Current research focuses on improving performance for low-resource languages through techniques such as continued pre-training on large multilingual corpora and parameter-efficient fine-tuning augmented with knowledge graphs, as well as on mitigating biases and improving factual accuracy across languages. These advances help bridge the language gap in AI applications, foster inclusivity, and enable more equitable access to advanced language technologies worldwide.
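
As a concrete illustration of the parameter-efficient fine-tuning mentioned above, here is a minimal sketch that attaches LoRA adapters to a small multilingual model using the Hugging Face peft library. The base model, target modules, and hyperparameters are illustrative assumptions, not a method prescribed by any particular paper.

```python
# Minimal sketch: LoRA-based parameter-efficient fine-tuning of a
# multilingual causal LM. Model choice and hyperparameters are
# illustrative assumptions only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "bigscience/bloom-560m"  # assumption: any multilingual causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank matrices into the attention
# projections while the base model's weights stay frozen.
lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update
    lora_alpha=16,                       # scaling factor for the update
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the small adapter matrices are trained while the base model stays frozen, a setup like this makes it practical to adapt a large multilingual model to a low-resource language on modest hardware.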

Papers