Chinese Language
Research on the Chinese language is advancing rapidly, driven by the need to improve the understanding and generation capabilities of large language models (LLMs) across diverse Chinese varieties, including Mandarin, Classical Chinese, and Taiwanese Hokkien. Current efforts focus on developing comprehensive benchmarks that evaluate LLMs on tasks ranging from basic understanding to complex reasoning, and on techniques such as continual pre-training, instruction tuning, and knowledge grounding to enhance model performance. These advances are crucial for bridging the resource gap in low-resource language varieties, improving cross-lingual understanding, and enabling new applications in education, cultural preservation, and other fields.
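For concreteness, here is a minimal sketch of what continual pre-training on a Chinese corpus could look like using the Hugging Face Transformers Trainer. This is an illustrative assumption, not a method from any of the surveyed papers: the model name, corpus file, and hyperparameters are placeholders.

```python
# Minimal continual pre-training sketch (illustrative; model name, corpus
# file, and hyperparameters are assumed placeholders, not from the source).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2-0.5B"  # hypothetical open Chinese-capable base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder corpus: one sentence per line of Mandarin, Classical Chinese,
# or Taiwanese Hokkien text would slot in here.
dataset = load_dataset("text", data_files={"train": "chinese_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

# Causal-LM objective: with mlm=False the collator produces labels by
# copying the inputs, and the model shifts them internally.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cpt-zh",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-5,  # small LR to limit forgetting of base abilities
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Instruction tuning follows the same pattern, with the corpus swapped for prompt-response pairs formatted into a chat template; a low learning rate is the usual guard against eroding the base model's existing capabilities.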