Cross-Lingual Classification
Cross-lingual classification aims to build models that accurately categorize text across multiple languages, overcoming the limitations of monolingual approaches. Current research explores several strategies: leveraging machine translation, developing multilingual models (such as transformer-based architectures), and employing techniques like prompting and mutual information maximization to improve cross-lingual transfer and to address challenges such as low-resource languages and topic coherence. These advances matter because they broaden access to NLP applications and enable cross-cultural research in fields such as healthcare and the social sciences, where multilingual data is prevalent.
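The core idea behind cross-lingual transfer is that a classifier trained on labeled data in one language can make predictions in another language when both are mapped into a shared representation space. The sketch below illustrates this with a deliberately simple stand-in: character n-gram overlap acts as the "shared space" (cognates like *fantastic*/*fantástica* share subwords), so a centroid classifier trained only on English examples can label a Spanish sentence zero-shot. All data and function names here are hypothetical toy examples; real systems use multilingual pretrained transformers rather than n-gram overlap.

```python
# Toy sketch of zero-shot cross-lingual transfer via a shared feature space.
# Hypothetical data; real systems use multilingual transformer encoders.
from collections import Counter

def char_ngrams(text, n=4):
    """Character n-grams serve as a crude language-agnostic representation."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def centroid(texts):
    """Sum n-gram counts over all training texts of one class."""
    total = Counter()
    for t in texts:
        total.update(char_ngrams(t))
    return total

def similarity(a, b):
    """Unnormalized overlap of shared n-grams between two count vectors."""
    return sum(min(a[k], b[k]) for k in a if k in b)

# "Training" uses English-only labeled examples.
pos = centroid(["fantastic film", "excellent acting", "a fantastic story"])
neg = centroid(["terrible film", "horrible acting", "a terrible story"])

def classify(text):
    v = char_ngrams(text)
    return "pos" if similarity(v, pos) >= similarity(v, neg) else "neg"

# Zero-shot evaluation on Spanish: cognates share subwords with the
# English training data, so the label transfers across languages.
print(classify("una película fantástica"))  # shares "fant..." with pos class
print(classify("una película terrible"))    # shares "terr..." with neg class
```

In practice the shared space comes from a multilingual encoder fine-tuned on the source language, but the transfer mechanism is the same: the decision rule never sees labeled target-language data.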