Multilingual Language Model
Multilingual language models (MLLMs) aim to understand and generate text across many languages, overcoming the limitations of English-centric models. Current research focuses on improving MLLM performance in low-resource languages, mitigating biases toward dominant languages, and developing techniques for efficient knowledge editing and unlearning to address privacy and ethical concerns. These advances are crucial for broadening access to AI-powered tools and fostering more equitable and inclusive natural language processing applications globally.
Papers
April 18, 2023
Romanization-based Large-scale Adaptation of Multilingual Language Models
Sukannya Purkayastha, Sebastian Ruder, Jonas Pfeiffer, Iryna Gurevych, Ivan Vulić
Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese
Vésteinn Snæbjarnarson, Annika Simonsen, Goran Glavaš, Ivan Vulić
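The romanization paper above adapts MLLMs by transliterating non-Latin-script text into Latin characters before tokenization, so that languages with unseen scripts can reuse the model's existing subword vocabulary. A minimal sketch of that preprocessing idea is below; the transliteration table is a tiny illustrative subset (real pipelines use a full romanization tool such as uroman), and `romanize` is a hypothetical helper, not the paper's actual code.

```python
# Sketch of romanization as a preprocessing step for script adaptation.
# CYRILLIC_TO_LATIN is a small illustrative mapping, not a complete
# transliteration scheme.

CYRILLIC_TO_LATIN = {
    "П": "P", "п": "p", "р": "r", "и": "i",
    "в": "v", "е": "e", "т": "t",
}

def romanize(text: str) -> str:
    """Map each character through the transliteration table,
    leaving characters without an entry unchanged."""
    return "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text)

# Romanized text can then be fed to a Latin-script subword tokenizer.
print(romanize("Привет"))
```

The key design point is that romanization happens before tokenization, so no new embedding rows are needed for the unseen script.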