Masked Language Modeling
Masked language modeling (MLM) is a self-supervised learning technique for training language models by masking and predicting words in a sentence. Current research focuses on improving MLM's efficiency and effectiveness through novel masking strategies, enhanced model architectures (like incorporating decoders into encoder-only models), and the development of more robust evaluation metrics for assessing biases and performance across diverse tasks and languages. These advancements are significant because they lead to more accurate and less biased language models with broader applications in natural language processing, including machine translation, text generation, and question answering.
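To make the masking-and-predicting idea concrete, the sketch below shows the classic BERT-style corruption scheme: roughly 15% of tokens are selected as prediction targets, and of those, 80% are replaced with a `[MASK]` token, 10% with a random token, and 10% left unchanged. This is a minimal, self-contained illustration; the `mlm_mask` function, the toy vocabulary, and the `None`-for-ignored-positions label convention are assumptions for this example, not any particular library's API.

```python
import random

MASK_TOKEN = "[MASK]"
# Toy vocabulary used only for the "replace with a random token" branch.
TOY_VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def mlm_mask(tokens, mask_prob=0.15, rng=None):
    """BERT-style MLM corruption (a sketch, not a library API).

    Selects ~mask_prob of positions as prediction targets. Of those:
    80% become [MASK], 10% a random vocabulary token, 10% unchanged.
    Returns (corrupted, labels): labels hold the original token at
    selected positions and None elsewhere (ignored by the loss).
    """
    rng = rng or random.Random()
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)              # model must recover the original
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK_TOKEN)
            elif r < 0.9:
                corrupted.append(rng.choice(TOY_VOCAB))
            else:
                corrupted.append(tok)       # kept as-is, but still predicted
        else:
            labels.append(None)             # not a target; loss skips it
            corrupted.append(tok)
    return corrupted, labels

tokens = ["the", "cat", "sat", "on", "the", "mat"]
corrupted, labels = mlm_mask(tokens, rng=random.Random(0))
```

A model trained under this objective sees the corrupted sequence and is scored only on the selected positions, which is what makes the setup self-supervised: the labels come from the input itself, with no human annotation.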