Masked Language Modeling
Masked language modeling (MLM) is a self-supervised learning technique for training language models: tokens in an input sequence are masked, and the model is trained to predict them from the surrounding context. Current research focuses on improving MLM's efficiency and effectiveness through novel masking strategies, enhanced model architectures (such as adding decoders to encoder-only models), and more robust evaluation metrics for assessing bias and performance across diverse tasks and languages. These advances matter because they yield more accurate, less biased language models with broad applications in natural language processing, including machine translation, text generation, and question answering.
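The masking-and-predicting objective described above can be sketched in a few lines. This is a minimal illustration of BERT-style token corruption, assuming the common 15% selection rate and 80/10/10 mask/random/keep split; `MASK_ID` and `VOCAB_SIZE` are placeholder values, not tied to any particular tokenizer.

```python
import random

MASK_ID = 103        # assumed id of the [MASK] token (placeholder value)
VOCAB_SIZE = 30522   # assumed vocabulary size (placeholder value)

def mask_tokens(token_ids, mask_prob=0.15, seed=0):
    """BERT-style MLM corruption.

    Select ~mask_prob of the positions; of those, 80% become [MASK],
    10% a random token, and 10% are left unchanged. Returns
    (corrupted_ids, labels): labels hold the original id at selected
    positions and -100 (conventionally ignored by the loss) elsewhere.
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tid in token_ids:
        if rng.random() < mask_prob:
            labels.append(tid)  # the model must predict the original id here
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK_ID)            # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted.append(rng.randrange(VOCAB_SIZE))  # 10%: random token
            else:
                corrupted.append(tid)                # 10%: keep, still predicted
        else:
            corrupted.append(tid)
            labels.append(-100)  # not selected: excluded from the loss
    return corrupted, labels
```

During training, the model receives `corrupted` as input and is scored only on the positions where `labels` is not -100, which is what makes the objective self-supervised: the targets come from the text itself.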