Few-Shot Cross-Lingual Transfer
Few-shot cross-lingual transfer learning aims to leverage multilingual language models to perform natural language processing tasks in low-resource languages using only a handful of labeled examples. Current research focuses on improving few-shot learning techniques, such as instruction tuning and parameter-efficient fine-tuning (e.g., LoRA), and on data selection strategies that maximize performance with minimal training examples. This field matters because it addresses the critical need for NLP capabilities in languages with limited resources, broadening access to language technology and enabling cross-lingual research across diverse linguistic contexts.
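As a concrete illustration of the parameter-efficient approach mentioned above, the sketch below wraps a multilingual encoder with LoRA adapters and fine-tunes it on a tiny target-language support set. This is a minimal sketch assuming the Hugging Face `transformers` and `peft` libraries; the backbone (`xlm-roberta-base`), the toy Swahili examples, and all hyperparameters are illustrative assumptions, not taken from the papers listed here.

```python
# Minimal sketch: LoRA-based few-shot fine-tuning of a multilingual encoder.
# Assumes `torch`, `transformers`, and `peft` are installed; the model name,
# example data, and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "xlm-roberta-base"  # multilingual backbone (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Inject low-rank adapters into the attention projections; only these
# adapter weights (plus the classification head) are trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # RoBERTa-style attention module names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# A toy few-shot support set in a target language (Swahili, purely illustrative).
few_shot = [
    ("Filamu hii ilikuwa nzuri sana.", 1),  # "This movie was very good." -> positive
    ("Huduma ilikuwa mbaya kabisa.", 0),    # "The service was terrible." -> negative
]
texts, labels = zip(*few_shot)
batch = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
batch["labels"] = torch.tensor(labels)

# Only parameters left trainable by PEFT receive gradient updates.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-4
)
model.train()
for _ in range(10):  # a few passes over the tiny support set
    outputs = model(**batch)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because only the rank-8 update matrices and the classifier head are updated while the pretrained multilingual weights stay frozen, this setup limits overfitting when just a handful of target-language examples are available, which is the core appeal of parameter-efficient methods in the few-shot cross-lingual setting.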
Papers
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer
Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi
Meta-learning For Vision-and-language Cross-lingual Transfer
Hanxu Hu, Frank Keller