Cross-Lingual Transferability

Cross-lingual transferability in natural language processing (NLP) concerns enabling models trained on high-resource languages to perform well on low-resource languages, reducing the need for extensive annotated data in every language. Current research investigates this transferability across a range of NLP tasks using multilingual large language models (LLMs) and explores techniques such as preference tuning, contrastive learning, and in-context learning to improve performance. Understanding and enhancing cross-lingual transferability is crucial for broadening the accessibility and applicability of NLP technologies worldwide, particularly for communities with limited linguistic resources.
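
To make the core idea concrete, below is a minimal sketch of zero-shot cross-lingual transfer: a multilingual encoder is fine-tuned on labelled data in a high-resource language (English) and then evaluated directly on a low-resource language with no target-language labels. The choice of backbone (xlm-roberta-base), dataset (XNLI), and the Swahili evaluation split are illustrative assumptions, not methods prescribed by any particular paper listed here.

```python
# A minimal sketch of zero-shot cross-lingual transfer (assumed setup:
# XLM-R backbone, XNLI data; both are illustrative choices).
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

model_name = "xlm-roberta-base"  # assumed multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=3)    # NLI labels: entailment / neutral / contradiction

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

# XNLI provides English training data and parallel test sets in 15 languages.
train_en = load_dataset("xnli", "en", split="train[:2000]").map(tokenize, batched=True)
test_sw  = load_dataset("xnli", "sw", split="test[:500]").map(tokenize, batched=True)

args = TrainingArguments(output_dir="xlt-demo", num_train_epochs=1,
                         per_device_train_batch_size=16, report_to="none")
trainer = Trainer(model=model, args=args, train_dataset=train_en)
trainer.train()  # supervision comes only from English examples

# Zero-shot evaluation on Swahili: no Swahili labels were seen during training.
print(trainer.evaluate(eval_dataset=test_sw))
```

The same pattern extends to the other transfer techniques mentioned above: contrastive learning or preference tuning would replace the supervised fine-tuning step, while in-context learning skips fine-tuning entirely and instead conditions a multilingual LLM on high-resource-language demonstrations at inference time.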

Papers