Monolingual Model
Monolingual models, trained exclusively on data from a single language, offer a counterpoint to multilingual models in natural language processing. Current research focuses on comparing their performance with that of multilingual counterparts across tasks such as speech recognition, sentiment analysis, and named entity recognition, typically using transformer-based architectures like BERT and its variants. This comparative approach aims to determine which model type is better suited to a given language and task, weighing factors such as training-data availability and the need to mitigate biases or security vulnerabilities. The findings inform the development of more effective and ethical NLP systems for diverse languages and applications.
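The comparative setup described above typically fine-tunes a monolingual and a multilingual checkpoint on the same downstream task and compares held-out scores. The sketch below illustrates one minimal version of this using the Hugging Face Transformers library; the checkpoint names, the SST-2 sentiment dataset, and the single-epoch training budget are illustrative assumptions, not a protocol drawn from any specific paper.

```python
# Minimal sketch: fine-tune a monolingual and a multilingual BERT on the
# same sentiment task and compare validation accuracy. Checkpoints and
# dataset are illustrative; swap in target-language models/data as needed.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINTS = {
    "monolingual": "bert-base-cased",               # English-only BERT
    "multilingual": "bert-base-multilingual-cased", # 104-language mBERT
}

def finetune_and_evaluate(checkpoint: str) -> float:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=2
    )

    # Small English sentiment benchmark (binary labels).
    data = load_dataset("glue", "sst2")
    encoded = data.map(
        lambda batch: tokenizer(batch["sentence"], truncation=True),
        batched=True,
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=f"out/{checkpoint}",
            num_train_epochs=1,
            per_device_train_batch_size=16,
        ),
        train_dataset=encoded["train"],
        eval_dataset=encoded["validation"],
        tokenizer=tokenizer,
        compute_metrics=lambda p: {
            "accuracy": (np.argmax(p.predictions, -1) == p.label_ids).mean()
        },
    )
    trainer.train()
    return trainer.evaluate()["eval_accuracy"]

for name, ckpt in CHECKPOINTS.items():
    print(f"{name}: {finetune_and_evaluate(ckpt):.4f}")
```

Holding the task, data, and training budget fixed while varying only the pretrained checkpoint isolates the effect of monolingual versus multilingual pretraining, which is the core of the comparisons this line of research performs.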