Cross-Lingual Sequence Labeling
Cross-lingual sequence labeling aims to leverage labeled data from high-resource languages to improve performance on low-resource languages for tasks such as named entity recognition and part-of-speech tagging. Current research focuses on adapting large multilingual language models, using techniques such as prompt engineering (e.g., decomposing prompts at the token level) and, for speech recognition, pseudo-labeling with cross-lingual acoustic models. These advances matter because they mitigate the scarcity of labeled data in most of the world's languages, enabling more robust and broadly applicable natural language processing systems.
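To make the transfer setup concrete, below is a minimal sketch of the zero-shot cross-lingual pattern described above: a multilingual encoder is fine-tuned on source-language labeled data only, then applied unchanged to a target language. It uses the Hugging Face `transformers` token-classification pipeline; the checkpoint name is a hypothetical placeholder standing in for any multilingual (e.g., XLM-R) model fine-tuned for NER on a high-resource language.

```python
# Zero-shot cross-lingual NER sketch: fine-tune on a high-resource language,
# then tag a low-resource language with the same model, relying on the shared
# multilingual representation space.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    # Hypothetical checkpoint: substitute any XLM-R model fine-tuned
    # for token classification on source-language (e.g., English) NER data.
    model="xlm-roberta-base-finetuned-english-ner",
    aggregation_strategy="simple",  # merge subword pieces into word-level entity spans
)

# Because the encoder shares one vocabulary and embedding space across ~100
# languages, entity labels learned from English often transfer directly.
for text in [
    "Barack Obama visited Nairobi last week.",           # source language (seen labels)
    "Barack Obama alitembelea Nairobi wiki iliyopita.",  # Swahili, zero-shot target
]:
    print(ner(text))
```

The same pattern underlies the pseudo-labeling variant: the zero-shot model's predictions on unlabeled target-language text are kept as silver labels and used for a further round of fine-tuning.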