Multilingual Large Language Model
Multilingual large language models (MLLMs) aim to extend the capabilities of large language models to multiple languages, improving cross-lingual understanding and generation. Current research focuses on improving performance for low-resource languages through techniques such as continued pre-training on massive multilingual datasets and parameter-efficient fine-tuning with knowledge graphs, as well as on mitigating biases and improving factual accuracy across languages. These advancements are significant for bridging the language gap in AI applications, fostering inclusivity, and enabling more equitable access to advanced language technologies globally.
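To make the parameter-efficient fine-tuning idea concrete, here is a minimal sketch using the Hugging Face `transformers` and `peft` libraries to attach LoRA adapters to a multilingual base model. The model name, rank, and other hyperparameters are illustrative assumptions, not settings from any specific paper above.

```python
# Minimal LoRA fine-tuning sketch for a multilingual base model.
# Assumes `transformers` and `peft` are installed; bigscience/bloom-560m
# is an assumed multilingual checkpoint chosen only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank update matrices instead of the full weights,
# which keeps per-language adaptation cheap in memory and compute.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["query_key_value"],   # attention projections in BLOOM
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The wrapped model can then be trained on a low-resource-language corpus with a standard training loop or `transformers.Trainer`; only the adapter weights are updated, so the multilingual knowledge of the frozen base model is preserved.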