Cross-Lingual Transfer
Cross-lingual transfer aims to leverage knowledge learned from high-resource languages to improve performance on low-resource languages across natural language processing tasks. Current research focuses on adapting large language models (LLMs) to this setting, using techniques such as model merging, data augmentation (including synthetic data generation and transliteration), in-context learning, and continual pre-training. This work is crucial for extending NLP to a wider range of languages, enabling applications such as multilingual question answering, sentiment analysis, and code generation to benefit communities worldwide.
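One of the techniques named above, model merging, can be as simple as linearly interpolating the weights of two checkpoints that share a base architecture: one fine-tuned on a high-resource-language task, one adapted to the target language via continual pre-training. The sketch below is a minimal illustration under those assumptions; the file paths, the `merge_state_dicts` helper, and the interpolation weight are hypothetical, not a specific published method.

```python
# A minimal sketch of model merging for cross-lingual transfer, assuming
# two fine-tuned checkpoints of the same base architecture: one tuned on
# an English task, one adapted to the target language via continual
# pre-training. File paths and the alpha value are illustrative only.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate matching floating-point weights; keep sd_a elsewhere."""
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b.get(key)
        if (
            tensor_b is not None
            and tensor_a.shape == tensor_b.shape
            and tensor_a.is_floating_point()
        ):
            merged[key] = alpha * tensor_a + (1.0 - alpha) * tensor_b
        else:
            merged[key] = tensor_a  # fall back to the task model's weights
    return merged

task_sd = torch.load("xlmr_english_task.pt", map_location="cpu")     # hypothetical path
lang_sd = torch.load("xlmr_target_lang_cpt.pt", map_location="cpu")  # hypothetical path
torch.save(merge_state_dicts(task_sd, lang_sd), "xlmr_merged.pt")
```

In practice, the interpolation weight is typically tuned on a small target-language development set, and more elaborate merging schemes build on the same weight-space arithmetic.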