Monolingual Language Model
Monolingual language models are trained on text from a single language, with the aim of improving performance and efficiency over multilingual counterparts, particularly for low-resource languages that lack extensive training data. Current research focuses on effective training strategies for such models, including trans-tokenization for efficient language adaptation and optimized training recipes for multi-vector retrievers, often built on transformer architectures such as BERT and its variants. This work matters because it addresses the limitations of multilingual models in low-resource settings, providing strong baselines and improved performance on a range of downstream tasks, and thereby extending natural language processing capabilities to a wider set of languages.
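To make the idea concrete, below is a minimal sketch (not any specific paper's recipe) of how a monolingual BERT-style model is typically set up: a language-specific WordPiece vocabulary is learned from a single-language corpus, and an encoder is initialized from scratch for masked language modeling. The corpus path `corpus.txt`, the vocabulary size, and the model dimensions are illustrative assumptions, using the Hugging Face `tokenizers` and `transformers` libraries.

```python
from tokenizers import BertWordPieceTokenizer
from transformers import (
    BertConfig,
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
)

# 1) Learn a vocabulary only from text in the target language.
#    "corpus.txt" is a placeholder for a monolingual text file.
tokenizer = BertWordPieceTokenizer(lowercase=False)
tokenizer.train(files=["corpus.txt"], vocab_size=32_000, min_frequency=2)
tokenizer.save_model(".")  # writes vocab.txt

# 2) Wrap the learned vocabulary in a fast tokenizer usable with transformers.
hf_tokenizer = BertTokenizerFast(vocab_file="vocab.txt")

# 3) Initialize a small BERT encoder from scratch with the monolingual vocabulary
#    (hidden size, depth, and heads here are illustrative, not a prescribed config).
config = BertConfig(
    vocab_size=hf_tokenizer.vocab_size,
    hidden_size=512,
    num_hidden_layers=6,
    num_attention_heads=8,
    intermediate_size=2048,
)
model = BertForMaskedLM(config)

# 4) Standard masked-language-modeling objective; this collator can be passed to
#    transformers.Trainer together with a tokenized dataset in the target language.
collator = DataCollatorForLanguageModeling(tokenizer=hf_tokenizer, mlm_probability=0.15)
```

Because the vocabulary is learned only from the target language, tokenization tends to be less fragmented than with a shared multilingual vocabulary, which is one of the practical motivations for monolingual models in low-resource settings.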