PLM-Based
Pre-trained language models (PLMs) are transforming numerous fields by offering powerful, adaptable tools for natural language processing and beyond. Current research focuses on improving PLM efficiency (e.g., through pruning, adapter modules, and smaller model variants), enhancing their knowledge representation and recall (including mitigating factual hallucinations), and adapting them to specific domains and tasks (such as biomedical applications or low-resource languages). These advances are having a significant impact across scientific communities and practical applications, from machine translation and question answering to autonomous vehicle control and personalized on-device language models.
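To make the efficiency techniques above concrete, here is a minimal sketch of a bottleneck adapter module, one common way to adapt a frozen PLM with few trainable parameters. The class name, dimensions, and example tensors are illustrative assumptions, not drawn from any specific paper in this collection.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter sketch: a small down-project / nonlinearity /
    up-project block inserted after a frozen transformer sub-layer,
    with a residual connection so the pre-trained representation is
    preserved and only the adapter's parameters are trained."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter learns a small correction
        # on top of the frozen backbone's hidden states.
        return x + self.up(self.act(self.down(x)))

# Example usage (hypothetical sizes): adapting a 768-dim hidden state,
# e.g., a BERT-base-sized backbone, with only ~100K trainable parameters.
adapter = Adapter(hidden_size=768)
hidden_states = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
out = adapter(hidden_states)
print(out.shape)  # torch.Size([2, 16, 768])
```

Because the backbone stays frozen, one backbone can serve many tasks by swapping in lightweight per-task adapters, which is what makes this family of methods attractive for on-device and multi-domain deployment.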