Multilingual Pre-Trained Language Models
Multilingual pre-trained language models (mPLMs) aim to provide language understanding capabilities across many languages within a single model, trained on massive multilingual corpora. Current research focuses on improving cross-lingual transfer, particularly to low-resource languages, through techniques such as prompt engineering, data augmentation (including transliteration), and model adaptation methods like adapter modules and knowledge distillation. These advances matter because they enable more efficient and effective natural language processing across a wider range of languages, with impact on machine translation, information retrieval, and other cross-lingual applications. A minimal adapter sketch follows below.
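As a rough illustration of the adapter-module idea mentioned above, the sketch below implements a standard bottleneck adapter in plain PyTorch: a small down-projection, non-linearity, and up-projection with a residual connection, which would be inserted after a frozen transformer sub-layer of an mPLM so that only the adapter's few parameters are trained for a new language or task. The hidden size of 768 (matching mBERT/XLM-R base) and the bottleneck size of 64 are illustrative assumptions, not values taken from any specific paper listed here.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add.

    Inserted after a frozen transformer sub-layer so that only the adapter
    (a small fraction of the model's parameters) is updated during
    language- or task-specific adaptation.
    """

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's representation intact
        # when the adapter is near-identity at initialization.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


if __name__ == "__main__":
    adapter = Adapter(hidden_size=768, bottleneck=64)
    x = torch.randn(2, 16, 768)  # (batch, seq_len, hidden) dummy activations
    out = adapter(x)
    print(out.shape)  # torch.Size([2, 16, 768])
```

In practice, one adapter is typically added per transformer layer and the base mPLM's weights are kept frozen, which keeps per-language adaptation cheap in both compute and storage.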