Multilingual Model
Multilingual models aim to process and generate text across multiple languages, overcoming the limitations of monolingual approaches and expanding access to natural language processing (NLP) for low-resource languages. Current research focuses on improving these models' performance, particularly for low-resource languages, building on transformer-based architectures (e.g., BERT, mT5) and exploring techniques such as instruction tuning, knowledge distillation, and targeted multilingual adaptation. This work is significant because it addresses biases inherent in predominantly English-centric models and enables broader access to NLP tools and applications across diverse linguistic communities.
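To make the knowledge-distillation technique mentioned above concrete, the sketch below shows a minimal soft-label distillation loss in PyTorch, where a smaller multilingual student classifier is trained to match a larger teacher's softened output distribution alongside the gold labels. The temperature, weighting factor, and tensor shapes are illustrative assumptions and are not taken from any of the papers listed here.

```python
# Minimal sketch of soft-label knowledge distillation (assumed setup, not from the listed papers).
# A smaller multilingual "student" learns to match a larger "teacher"'s softened predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Combine soft-target KL loss (teacher guidance) with hard-label cross-entropy."""
    # Soften both distributions with the temperature, then match them via KL divergence.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the gold labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy example: batch of 4 examples, 3 classes (e.g., a cross-lingual text-classification task).
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)  # in practice, produced by a frozen multilingual teacher
labels = torch.tensor([0, 2, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```

In practice, the teacher and student logits would come from multilingual transformer encoders rather than random tensors; the loss itself is unchanged.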
Papers
ParaNames: A Massively Multilingual Entity Name Corpus
Jonne Sälevä, Constantine Lignos
Cross-Lingual Text Classification with Multilingual Distillation and Zero-Shot-Aware Training
Ziqing Yang, Yiming Cui, Zhigang Chen, Shijin Wang
CINO: A Chinese Minority Pre-trained Language Model
Ziqing Yang, Zihang Xu, Yiming Cui, Baoxin Wang, Min Lin, Dayong Wu, Zhigang Chen