Monolingual Model
Monolingual models, trained exclusively on data from a single language, offer a counterpoint to multilingual models in natural language processing. Current research compares their performance against multilingual counterparts across tasks such as speech recognition, sentiment analysis, and named entity recognition, often using transformer-based architectures like BERT and its variants. These comparisons aim to identify the better model type for a given language and task, weighing factors such as resource availability and the need to mitigate biases or security vulnerabilities. The findings inform the development of more effective and ethical NLP systems for diverse languages and applications.
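A minimal sketch of such a comparison, assuming the Hugging Face transformers library: it probes a monolingual German BERT and multilingual BERT with the same masked-token query. The checkpoint names are real public models, but the one-sentence probe is purely illustrative; an actual study would evaluate on full benchmarks.

```python
# Compare a monolingual vs. a multilingual checkpoint on masked-token
# prediction (illustrative probe only, not a full evaluation).
from transformers import pipeline

models = {
    "monolingual": "bert-base-german-cased",         # German-only BERT
    "multilingual": "bert-base-multilingual-cased",  # mBERT, trained on ~100 languages
}

sentence = "Berlin ist die [MASK] von Deutschland."  # expect "Hauptstadt" (capital)

for label, checkpoint in models.items():
    fill = pipeline("fill-mask", model=checkpoint)
    top = fill(sentence)[0]  # highest-scoring prediction
    print(f"{label}: {top['token_str']!r} (score={top['score']:.3f})")
```

Monolingual models often do better on such in-language probes because their entire vocabulary and training budget is spent on one language; quantifying that trade-off against the broader coverage of multilingual models is precisely what the comparative studies described above do.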