Pre-Trained Language Models
Pre-trained language models (PLMs) are revolutionizing natural language processing by providing powerful, general-purpose representations of text. Current research focuses on improving PLM efficiency (e.g., through model compression and parameter-efficient tuning), addressing biases and limitations in factual knowledge and reasoning capabilities, and enhancing their performance on specific downstream tasks like question answering and text generation via techniques such as instruction tuning and prompt engineering. These advancements are significantly impacting various fields, enabling more efficient and effective applications in areas ranging from hate speech detection to knowledge graph completion and biomedical document retrieval.
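As a concrete illustration of the parameter-efficient tuning mentioned above, the sketch below adapts a PLM with LoRA adapters so that only a tiny fraction of weights is trained. It assumes the Hugging Face `transformers` and `peft` libraries; the checkpoint name and hyperparameters are illustrative choices, not taken from any of the papers summarized here.

```python
# Minimal sketch: parameter-efficient tuning of a PLM with LoRA.
# Assumes `pip install transformers peft`; model and hyperparameters
# are illustrative, not from the source.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "bert-base-uncased"  # any PLM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the pre-trained weights and inject small trainable low-rank
# adapters instead of updating all ~110M parameters.
config = LoraConfig(
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor for the adapter output
    target_modules=["query", "value"],  # attention projections to adapt
    lora_dropout=0.1,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

The wrapped model can then be fine-tuned on a downstream task (e.g., hate speech detection) with a standard training loop, while the frozen PLM keeps its general-purpose representations intact.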