Cross-Lingual Transferability
Cross-lingual transferability in natural language processing (NLP) concerns enabling models trained on high-resource languages to perform effectively on low-resource languages, minimizing the need for extensive data annotation in each language. Current research investigates this transferability across a range of NLP tasks using multilingual large language models (LLMs), exploring techniques such as preference tuning, contrastive learning, and in-context learning to improve transfer performance. Understanding and enhancing cross-lingual transferability is crucial for broadening the global accessibility and applicability of NLP technologies, particularly in regions with limited linguistic resources.
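The zero-shot transfer setting at the heart of this line of work can be illustrated with a short sketch: fine-tune a multilingual encoder on labeled high-resource-language data, then evaluate it directly on another language with no target-language labels. The sketch below assumes the Hugging Face transformers API and the xlm-roberta-base checkpoint; the toy English examples, the Swahili test sentence, and the single gradient step are illustrative stand-ins for a real training loop, not any specific paper's method.

```python
# A minimal sketch of zero-shot cross-lingual transfer, assuming the
# Hugging Face transformers library and the multilingual xlm-roberta-base
# checkpoint (both assumptions; any multilingual encoder would do).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Train on high-resource (English) labels only. A real run would loop
# over a full dataset; one illustrative gradient step is shown here.
english_batch = ["The movie was wonderful.", "A dreadful, boring film."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative (toy labels)
inputs = tokenizer(english_batch, padding=True, return_tensors="pt")
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Zero-shot evaluation on a low-resource language (Swahili, illustrative):
# no Swahili labels were seen during training, but the shared multilingual
# representation lets the English supervision transfer.
model.eval()
with torch.no_grad():
    swahili = tokenizer("Filamu hiyo ilikuwa nzuri sana.", return_tensors="pt")
    prediction = model(**swahili).logits.argmax(dim=-1)
print("Predicted sentiment class:", prediction.item())
```

A single update like this would not, of course, yield reliable transfer; published results fine-tune on full source-language datasets and then report zero-shot accuracy on the target languages.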