Multilingual Large Language Models
Multilingual large language models (MLLMs) aim to extend the capabilities of large language models to multiple languages, improving cross-lingual understanding and generation. Current research focuses on improving performance for low-resource languages through techniques such as continued pre-training on large multilingual corpora and parameter-efficient fine-tuning with knowledge graphs, as well as on mitigating biases and improving factual accuracy across languages. These advances matter for bridging the language gap in AI applications, fostering inclusivity, and enabling more equitable access to advanced language technologies globally.
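As a rough illustration of the parameter-efficient fine-tuning mentioned above, the sketch below applies LoRA adapters to a multilingual base model with the Hugging Face PEFT library. The base model, target modules, and hyperparameters are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal LoRA fine-tuning setup sketch; model and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "bigscience/bloom-560m"  # assumed multilingual base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                 # low-rank adapter dimension
    lora_alpha=16,                       # scaling factor
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the small adapter matrices are trainable,
# keeping the original multilingual weights frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The wrapped model can then be trained on low-resource-language data with a standard training loop or the Transformers Trainer, updating only a small fraction of the parameters.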
Papers
Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM
Zheng Wei Lim, Nitish Gupta, Honglin Yu, Trevor Cohn
Kalahi: A handcrafted, grassroots cultural LLM evaluation suite for Filipino
Jann Railey Montalan, Jian Gang Ngui, Wei Qi Leong, Yosephine Susanto, Hamsawardhini Rengarajan, Alham Fikri Aji, William Chandra Tjhi