Cross-Lingual Knowledge Transfer
Cross-lingual knowledge transfer aims to leverage knowledge learned from high-resource languages to improve the performance of natural language processing (NLP) models on low-resource languages. Current research focuses on adapting large language models (LLMs) and other architectures through parameter-efficient fine-tuning, knowledge editing, and data augmentation methods such as pseudo-semantic data augmentation and translation-based approaches. This work addresses the critical issue of language imbalance in NLP, enabling the development of more inclusive and globally accessible NLP applications across diverse languages.
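To make the parameter-efficient transfer setting concrete, below is a minimal sketch of zero-shot cross-lingual transfer with LoRA adapters. It is not taken from any of the papers listed here; the choice of xlm-roberta-base as the multilingual backbone, the toy sentiment task, and the Swahili test sentence are illustrative assumptions. Adapters are trained on labeled English data only, and the model is then applied directly to another language.

```python
# Minimal sketch: zero-shot cross-lingual transfer via parameter-efficient
# fine-tuning (LoRA). Backbone, task, and examples are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the backbone with LoRA adapters: only small low-rank matrices are
# trained, while the pretrained multilingual weights stay frozen.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Fine-tune on labeled examples in the high-resource language (English here).
english_texts = ["The movie was wonderful.", "The service was terrible."]
english_labels = torch.tensor([1, 0])
batch = tokenizer(english_texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for _ in range(3):  # a few toy steps; a real run iterates over a full dataset
    out = model(**batch, labels=english_labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation on a lower-resource language (Swahili here) relies on
# the shared multilingual representations learned during pretraining.
model.eval()
swahili_text = ["Huduma ilikuwa mbaya sana."]  # "The service was very bad."
with torch.no_grad():
    logits = model(**tokenizer(swahili_text, padding=True, return_tensors="pt")).logits
print(logits.argmax(dim=-1))
```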
Papers
Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks
Nadezhda Chirkova, Vassilina Nikoulina
Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages
Yuanchi Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, Yang Liu