Cross-Lingual Transfer
Cross-lingual transfer aims to leverage knowledge learned from high-resource languages to improve performance on low-resource languages across natural language processing (NLP) tasks. Current research focuses on adapting large language models (LLMs) for cross-lingual transfer, employing techniques such as model merging, data augmentation (including synthetic data generation and transliteration), and training strategies such as in-context learning and continual pre-training. This work is crucial for extending NLP to a wider range of languages, enabling applications such as multilingual question answering, sentiment analysis, and code generation to benefit diverse communities globally.
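As a concrete illustration of the basic setup, the sketch below shows zero-shot cross-lingual transfer with a multilingual encoder: fine-tune on labelled data in a high-resource source language (English), then apply the model unchanged to a target-language sentence. This is a minimal sketch, assuming the Hugging Face transformers and torch libraries and the public xlm-roberta-base checkpoint; the toy training pairs and the Faroese test sentence are illustrative placeholders, not data or code from the papers listed below.

```python
# Minimal sketch of zero-shot cross-lingual transfer (assumptions: transformers,
# torch, and the public xlm-roberta-base checkpoint are available).
# Idea: fine-tune a multilingual encoder on labelled data in a high-resource
# language (English), then apply it unchanged to a low-resource language.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"  # multilingual encoder covering ~100 languages

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy English training examples (high-resource source language); real work
# would use a full labelled corpus such as SST-2.
train_texts = ["I loved this film.", "This was a terrible movie."]
train_labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps; real fine-tuning runs full epochs
    batch = tokenizer(train_texts, padding=True, return_tensors="pt")
    loss = model(**batch, labels=train_labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot transfer: score a target-language sentence the model never saw
# labelled data for (Faroese here, purely illustrative).
model.eval()
target_text = "Hetta var ein framúrskarandi filmur."  # "This was an excellent film."
with torch.no_grad():
    logits = model(**tokenizer(target_text, return_tensors="pt")).logits
print("Predicted label:", logits.argmax(dim=-1).item())
```

The transfer relies entirely on the shared multilingual representation space: no target-language labels are used, which is the baseline that techniques such as romanization-based adaptation or transfer via closely related languages aim to improve on.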
Papers
Romanization-based Large-scale Adaptation of Multilingual Language Models
Sukannya Purkayastha, Sebastian Ruder, Jonas Pfeiffer, Iryna Gurevych, Ivan Vulić
Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese
Vésteinn Snæbjarnarson, Annika Simonsen, Goran Glavaš, Ivan Vulić