Cross-Lingual
Cross-lingual research focuses on bridging language barriers in natural language processing by building models that understand and process text across multiple languages. Current efforts concentrate on improving multilingual large language models (LLMs) through techniques such as continual pre-training, adapter modules, and contrastive learning, often addressing challenges related to low-resource languages and semantic alignment. This work is crucial for expanding global access to NLP technologies and for enabling cross-cultural communication and information exchange in applications such as machine translation, sentiment analysis, and cross-lingual information retrieval.
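To make the contrastive-learning idea mentioned above concrete, the sketch below shows a common formulation of cross-lingual semantic alignment: an InfoNCE-style loss that pulls the embeddings of parallel sentence pairs together while pushing apart other sentences in the batch. This is a minimal, illustrative example, not the method of any paper listed below; the function name, temperature value, and the random embeddings standing in for a multilingual encoder's output are all assumptions for demonstration.

```python
import torch
import torch.nn.functional as F


def cross_lingual_contrastive_loss(src_emb: torch.Tensor,
                                    tgt_emb: torch.Tensor,
                                    temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style alignment loss over parallel sentence pairs.

    src_emb, tgt_emb: (batch, dim) embeddings of aligned sentences,
    e.g. source sentences and their translations. Row i of src_emb is
    the positive for row i of tgt_emb; all other rows act as in-batch
    negatives. (Illustrative sketch, not from any specific paper.)
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    # Cosine-similarity matrix between every source/target pair in the batch.
    logits = src @ tgt.T / temperature               # shape: (batch, batch)
    labels = torch.arange(src.size(0), device=src.device)
    # Symmetric cross-entropy: align source-to-target and target-to-source.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))


if __name__ == "__main__":
    torch.manual_seed(0)
    # Random vectors stand in for a multilingual encoder's sentence embeddings.
    src = torch.randn(8, 256)   # e.g. English sentences
    tgt = torch.randn(8, 256)   # e.g. their translations in another language
    print(cross_lingual_contrastive_loss(src, tgt).item())
```

In practice the two embedding batches would come from the same multilingual encoder applied to a parallel corpus, and the loss would be minimized alongside (or after) standard pre-training so that translations of the same sentence map to nearby points in the shared representation space.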
Papers
Harnessing Cross-lingual Features to Improve Cognate Detection for Low-resource Languages
Diptesh Kanojia, Raj Dabre, Shubham Dewangan, Pushpak Bhattacharyya, Gholamreza Haffari, Malhar Kulkarni
DOCmT5: Document-Level Pretraining of Multilingual Language Models
Chia-Hsuan Lee, Aditya Siddhant, Viresh Ratnakar, Melvin Johnson
A Case Study and Qualitative Analysis of Simple Cross-Lingual Opinion Mining
Gerhard Johann Hagerer, Wing Sheung Leung, Qiaoxi Liu, Hannah Danner, Georg Groh
Leveraging Advantages of Interactive and Non-Interactive Models for Vector-Based Cross-Lingual Information Retrieval
Linlong Xu, Baosong Yang, Xiaoyu Lv, Tianchi Bi, Dayiheng Liu, Haibo Zhang