Zero-Shot Cross-Lingual
Zero-shot cross-lingual natural language processing (NLP) aims to build models that understand and generate text in multiple languages without requiring training data for each target language. Current research centers on multilingual pre-trained language models (such as mT5 and XLM-R) and explores techniques including in-context learning, prompt engineering, and data augmentation strategies (such as code-switching and pseudo-semantic data generation) to improve cross-lingual transfer. This work is crucial for bridging the language gap in NLP applications, enabling broader access to information and technology across diverse linguistic communities.
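As a concrete illustration of the transfer setting described above, the sketch below shows the standard recipe with a multilingual encoder: fine-tune on labelled data in one language (typically English), then apply the model directly to another language with no target-language training examples. This is a minimal sketch assuming the Hugging Face transformers and PyTorch libraries; the model name, task (binary sentiment classification), and example sentence are illustrative, and the supervised fine-tuning loop is elided.

```python
# Minimal sketch of zero-shot cross-lingual transfer with a multilingual
# pre-trained encoder (XLM-R). Assumes `transformers` and `torch` are installed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"  # illustrative choice of multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# ... fine-tune `model` here on English-only labelled sentiment data
#     (a standard supervised training loop, omitted for brevity) ...

# Zero-shot evaluation: the English-fine-tuned model is applied to a Spanish
# input it never saw a labelled equivalent of during training.
spanish_review = "La película fue absolutamente maravillosa."  # "The film was absolutely wonderful."
inputs = tokenizer(spanish_review, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()
print("Predicted label id:", prediction)
```

Because XLM-R's shared multilingual representations place semantically similar sentences from different languages close together, the classifier head trained on English inputs often transfers to other languages without any target-language supervision; the papers listed below study how well this transfer works and how to improve it.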
Papers
The Model Arena for Cross-lingual Sentiment Analysis: A Comparative Study in the Era of Large Language Models
Xiliang Zhu, Shayna Gardiner, Tere Roldán, David Rossouw
SSP: Self-Supervised Prompting for Cross-Lingual Transfer to Low-Resource Languages using Large Language Models
Vipul Rathore, Aniruddha Deb, Ankish Chandresh, Parag Singla, Mausam