Transformer-Based Pre-Trained Language Models
Transformer-based pre-trained language models (PLMs) are revolutionizing natural language processing by learning powerful representations from massive text corpora. Current research focuses on improving efficiency (e.g., through quantization and pruning), mitigating biases, and extending capabilities to diverse tasks (e.g., hate speech detection, reasoning, and multi-task learning) using architectures such as BERT, T5, and GPT variants. These advances are improving both the scientific understanding of language and practical applications across domains, including machine translation, question answering, and more responsible AI deployment.
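To make the efficiency theme concrete, the following is a minimal sketch, assuming the Hugging Face transformers library and PyTorch are installed, of post-training dynamic quantization applied to a pre-trained BERT checkpoint. The checkpoint name and classification head are illustrative assumptions, not drawn from any specific paper listed here.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint; other BERT-style encoders can be swapped in the same way.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Dynamic quantization stores the Linear layers' weights as int8,
# shrinking the model and speeding up CPU inference with modest accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer(
    "Pre-trained language models learn from massive text corpora.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)  # e.g. torch.Size([1, 2]) for a default two-label head

The quantized model accepts the same tokenizer output and inference code as the full-precision original, which is what makes such post-training efficiency techniques easy to bolt onto existing PLM pipelines; pruning and distillation follow a similar pattern.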
Papers
Paper entries on this topic span October 15, 2022 through May 23, 2024.