Cross-Lingual Supervision

Cross-lingual supervision leverages parallel data from multiple languages to improve the performance of large language models (LLMs) across tasks, particularly in low-resource languages. Current research focuses on techniques such as selectively fine-tuning LLMs to mitigate catastrophic forgetting, augmenting training data with cross-lingual entities, and bridging models specialized in multilingual understanding with models specialized in reasoning. These advances strengthen multilingual capabilities, improving machine translation, question answering, and reasoning, and ultimately fostering more inclusive and effective natural language processing applications.
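
To make the "selective fine-tuning" idea concrete, the sketch below freezes most of a pretrained causal LM and updates only its top transformer blocks on a parallel example, so that most source-language knowledge is left untouched. This is a minimal illustration, not any particular paper's method: the model name ("gpt2" stands in for a multilingual LLM), the number of unfrozen blocks, and the toy English-to-German training pair are all placeholder assumptions.

```python
# Minimal sketch of selective fine-tuning to mitigate catastrophic forgetting.
# Assumptions: a Hugging Face GPT-2-style layout (model.transformer.h); which
# and how many blocks to unfreeze is an illustrative choice, not a prescription.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; swap in a multilingual LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every parameter, then re-enable gradients only for the top N blocks,
# so most pretrained weights stay fixed during cross-lingual adaptation.
UNFROZEN_BLOCKS = 2
for param in model.parameters():
    param.requires_grad = False
for block in model.transformer.h[-UNFROZEN_BLOCKS:]:
    for param in block.parameters():
        param.requires_grad = True

# Optimize only the small trainable subset.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)

# One illustrative update on a toy parallel pair (English -> German).
batch = tokenizer("Hello world => Hallo Welt", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

In practice the frozen/unfrozen split is a tunable trade-off: unfreezing fewer blocks preserves more of the original model's behavior, while unfreezing more gives the new language greater capacity to adapt.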

Papers