Cross-Lingual Supervision
Cross-lingual supervision leverages parallel data across languages to improve the performance of large language models (LLMs) on a range of tasks, particularly for low-resource languages. Current research focuses on techniques such as selectively finetuning LLMs to mitigate catastrophic forgetting, augmenting training data with cross-lingual entities, and developing methods that bridge models specialized in language understanding with models specialized in reasoning. These advances strengthen multilingual capabilities, improving machine translation, question answering, and reasoning, and ultimately fostering more inclusive and effective natural language processing applications.
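As a concrete illustration of the first technique, below is a minimal sketch of selective finetuning using PyTorch and Hugging Face Transformers: all weights are frozen except the last two transformer blocks, which limits catastrophic forgetting of the base model's capabilities while adapting to parallel data. The model choice (bigscience/bloom-560m), the layers unfrozen, and the toy English-French pair are illustrative assumptions, not details drawn from any particular paper.

```python
# Minimal sketch of selective finetuning for cross-lingual supervision.
# Model name, layer choice, and the training pair are illustrative
# assumptions, not the method of any specific paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # hypothetical multilingual LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every parameter, then unfreeze only the last two transformer
# blocks; keeping most weights fixed helps mitigate catastrophic
# forgetting of the model's original (high-resource) abilities.
for param in model.parameters():
    param.requires_grad = False
for block in model.transformer.h[-2:]:
    for param in block.parameters():
        param.requires_grad = True

# The optimizer only sees the small trainable subset of parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)

# One illustrative training step on a parallel (source -> target) pair.
src = "The cat sat on the mat."
tgt = "Le chat était assis sur le tapis."
batch = tokenizer(f"{src} => {tgt}", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The key design choice is which subset of parameters to unfreeze: tuning only the top blocks (as above) is one common heuristic, but papers in this area explore other selection criteria for the trainable subset.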