PLM-Based
Pre-trained language models (PLMs) are transforming numerous fields by providing powerful, adaptable tools for natural language processing and beyond. Current research focuses on three broad directions: improving PLM efficiency (e.g., through pruning, adapter modules, and smaller model variants), enhancing knowledge representation and recall (including mitigating factual hallucination), and adapting PLMs to specific domains and tasks (such as biomedical applications or low-resource languages). These advances are reaching both scientific communities and practical applications, from machine translation and question answering to autonomous vehicle control and personalized mobile language models. A sketch of one of the efficiency techniques follows.
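To make the adapter idea concrete, below is a minimal PyTorch sketch of a bottleneck adapter in the style of Houlsby et al. (2019): a small down-project / nonlinearity / up-project block with a residual connection, inserted into a frozen PLM so that only a few parameters are trained per task. The class name BottleneckAdapter and the dimensions are illustrative assumptions, not taken from any specific paper.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter (illustrative sketch): down-project,
    nonlinearity, up-project, plus a residual connection. The
    PLM backbone stays frozen; only these layers are trained."""

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Zero-initialize the up-projection so the adapter acts as
        # the identity at the start of training and does not perturb
        # the pre-trained representations.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

if __name__ == "__main__":
    adapter = BottleneckAdapter()
    x = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
    print(adapter(x).shape)      # torch.Size([2, 16, 768])

The design choice that makes adapters cheap: with hidden_dim = 768 and bottleneck_dim = 64, each adapter adds roughly 100K parameters per insertion point, a small fraction of the frozen backbone, which is why per-task adapter sets are far lighter to store and swap than full fine-tuned copies of the model.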