Multilingual Pre-Trained Language Models
Multilingual pre-trained language models (mPLMs) aim to provide language understanding across many languages simultaneously by training on massive multilingual corpora. Current research focuses on improving cross-lingual transfer, particularly to low-resource languages, through techniques such as prompt engineering, data augmentation (including transliteration), and model adaptation methods like adapter modules and knowledge distillation; a sketch of the adapter idea follows below. These advances matter because they make natural language processing more efficient and effective across a wider range of languages, benefiting applications such as machine translation, information retrieval, and cross-lingual understanding.
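To make the adapter-based adaptation mentioned above concrete, here is a minimal sketch of a bottleneck adapter module in PyTorch. All names and sizes (LanguageAdapter, hidden_size, bottleneck_size) are hypothetical choices for illustration rather than the method of any particular paper; the general idea is that a small per-language module is trained while the pre-trained multilingual backbone stays frozen.

```python
import torch
import torch.nn as nn

class LanguageAdapter(nn.Module):
    """Bottleneck adapter inserted after a transformer sub-layer.

    Hypothetical illustration: one small adapter is trained per target
    language while the frozen mPLM backbone is shared across languages.
    """

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.down_proj = nn.Linear(hidden_size, bottleneck_size)
        self.activation = nn.GELU()
        self.up_proj = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Down-project, apply a non-linearity, up-project, then add the
        # residual so the adapter starts out close to an identity mapping.
        residual = hidden_states
        x = self.layer_norm(hidden_states)
        x = self.down_proj(x)
        x = self.activation(x)
        x = self.up_proj(x)
        return residual + x


# Example: adapt hidden states produced by a frozen mPLM layer.
adapter = LanguageAdapter(hidden_size=768, bottleneck_size=64)
dummy_hidden = torch.randn(2, 16, 768)  # (batch, sequence length, hidden size)
adapted = adapter(dummy_hidden)
print(adapted.shape)  # torch.Size([2, 16, 768])
```

Because only the small down- and up-projections are trained, this kind of module can adapt a large multilingual model to a new language at a fraction of the cost of full fine-tuning.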
Papers
Papers on this topic were published between May 26, 2023 and October 4, 2024.