Masked Language Modeling
Masked language modeling (MLM) is a self-supervised technique for training language models: a fraction of the tokens in a sentence is hidden, and the model is trained to predict them from the surrounding context. Current research focuses on improving MLM's efficiency and effectiveness through novel masking strategies, enhanced model architectures (such as adding decoders to encoder-only models), and more robust evaluation metrics for assessing bias and performance across diverse tasks and languages. These advances matter because they yield more accurate and less biased language models with broad applications in natural language processing, including machine translation, text generation, and question answering.
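To make the masking step concrete, here is a minimal sketch of a BERT-style masking scheme (select roughly 15% of tokens; of those, replace 80% with a mask token, 10% with a random token, and leave 10% unchanged). The function name, the toy vocabulary size, and the MASK_ID / IGNORE constants are illustrative assumptions, not part of any particular library.

```python
import random

MASK_ID = 0      # hypothetical [MASK] token id
VOCAB_SIZE = 30  # hypothetical vocabulary size
IGNORE = -100    # label value for positions whose loss is ignored

def mask_tokens(token_ids, mask_prob=0.15):
    """Return (masked_inputs, labels) for masked language modeling."""
    inputs, labels = [], []
    for tok in token_ids:
        if random.random() < mask_prob:
            labels.append(tok)               # the model must predict the original token here
            r = random.random()
            if r < 0.8:
                inputs.append(MASK_ID)       # 80%: replace with the mask token
            elif r < 0.9:
                inputs.append(random.randrange(VOCAB_SIZE))  # 10%: replace with a random token
            else:
                inputs.append(tok)           # 10%: keep the original token
        else:
            inputs.append(tok)
            labels.append(IGNORE)            # no prediction is made at this position
    return inputs, labels

if __name__ == "__main__":
    ids = [5, 12, 7, 23, 9, 14, 3, 18]
    masked, labels = mask_tokens(ids)
    print("inputs:", masked)
    print("labels:", labels)
```

During training, the model receives the corrupted inputs and the loss is computed only at the positions whose labels are not the ignore value, which is what makes the objective self-supervised: the targets come from the original text itself.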