Chinese Pre-Trained Models
Chinese pre-trained language models are advancing rapidly, with the aim of improving natural language processing for the Chinese language. Research focuses on developing smaller, faster models that retain high accuracy, on incorporating word-level semantics to enrich character-based representations, and on addressing challenges such as robustness to word insertion and deletion errors. These advances are important for broadening access to powerful NLP tools and for progress in applications such as machine reading comprehension, text classification, and cross-modal image-text tasks.
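As a concrete illustration of one of these themes, the sketch below shows a common, simplified way to inject word-level semantics into character-based Chinese representations: segment the text into words, then add each word's vector to the embedding of every character it spans. This is a minimal toy example, not the method of any particular paper; the segmenter (jieba), the stand-in embedding function, and the dimensions are illustrative assumptions.

```python
import numpy as np
import jieba  # widely used Chinese word segmenter; any segmenter would do

DIM = 8  # toy embedding size

def embed(token: str) -> np.ndarray:
    # Stand-in for a learned embedding table: a deterministic
    # pseudo-random vector per token, just for demonstration.
    seed = sum(ord(c) for c in token)
    return np.random.default_rng(seed).random(DIM)

def char_word_fused(sentence: str) -> np.ndarray:
    # Start from per-character embeddings, the usual unit for Chinese PLMs.
    char_vecs = np.stack([embed(c) for c in sentence])
    # Segment into words and add each word's vector to the characters it
    # spans, so every character also carries the semantics of its word.
    pos = 0
    for word in jieba.cut(sentence):
        char_vecs[pos:pos + len(word)] += embed(word)
        pos += len(word)
    return char_vecs  # shape: (num_chars, DIM)

if __name__ == "__main__":
    vecs = char_word_fused("中文预训练模型")
    print(vecs.shape)  # (7, 8): one fused vector per character
```

In practice, published models fuse word information in more sophisticated ways (e.g., via attention over lattice structures or auxiliary pre-training objectives), but the additive character-word fusion above captures the basic idea.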