Multilingual Language Model
Multilingual language models (MLLMs) are AI systems designed to understand and generate text across many languages, overcoming the limitations of English-centric models. Current research focuses on improving MLLM performance in low-resource languages, mitigating biases toward dominant languages, and developing techniques for efficient knowledge editing and unlearning to address privacy and ethical concerns. These advances are crucial for broadening access to AI-powered tools and fostering more equitable and inclusive natural language processing applications globally.
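To make the core idea concrete, below is a minimal sketch of a single shared model handling prompts in different languages without per-language fine-tuning, which is the premise behind the cross-lingual transfer work listed in the papers that follow. It assumes the Hugging Face transformers library and the publicly released xlm-roberta-base checkpoint; neither is prescribed by the papers themselves.

```python
# Hedged sketch: zero-shot multilingual masked-token prediction with one model.
# Assumes `pip install transformers torch` and network access to download
# the xlm-roberta-base checkpoint (an illustrative choice, not from the papers).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

# The same model scores masked tokens in English and French alike,
# illustrating the shared multilingual representation space.
for prompt in [
    "The capital of France is <mask>.",       # English
    "La capitale de la France est <mask>.",   # French
]:
    top = fill_mask(prompt, top_k=1)[0]
    print(f"{prompt!r} -> {top['token_str']} (score={top['score']:.3f})")
```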
Papers
Exploring the Relationship between Alignment and Cross-lingual Transfer in Multilingual Transformers
Félix Gaschi, Patricio Cerda, Parisa Rastin, Yannick Toussaint
Cross-Lingual Transfer Learning for Phrase Break Prediction with Multilingual Language Model
Hoyeon Lee, Hyun-Wook Yoon, Jong-Hwan Kim, Jae-Min Kim
Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation and Overlap Across Languages
Tomasz Limisiewicz, Jiří Balhar, David Mareček
Free Lunch: Robust Cross-Lingual Transfer via Model Checkpoint Averaging
Fabian David Schmidt, Ivan Vulić, Goran Glavaš
Towards a Common Understanding of Contributing Factors for Cross-Lingual Transfer in Multilingual Language Models: A Review
Fred Philippy, Siwen Guo, Shohreh Haddadan
Romanization-based Large-scale Adaptation of Multilingual Language Models
Sukannya Purkayastha, Sebastian Ruder, Jonas Pfeiffer, Iryna Gurevych, Ivan Vulić
Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese
Vésteinn Snæbjarnarson, Annika Simonsen, Goran Glavaš, Ivan Vulić