Cross-Lingual Models
Cross-lingual models aim to build language understanding systems that handle many languages at once, mitigating the data scarcity that affects low-resource languages. Current research focuses on improving cross-lingual transfer learning through techniques such as knowledge distillation, multilingual instruction tuning, and contextual label projection, often employing transformer-based architectures like BERT and mT5. These advances matter for expanding access to NLP technologies across diverse linguistic communities, enabling applications such as cross-lingual question answering, machine translation, and sentiment analysis in a wider range of languages.
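To make the transfer setting concrete, the sketch below fine-tunes a multilingual encoder (bert-base-multilingual-cased via the Hugging Face transformers library) on a toy English sentiment task and then scores a Hindi sentence zero-shot, with no Hindi labels used in training. The toy data, model choice, and hyperparameters are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal sketch of zero-shot cross-lingual transfer (illustrative only):
# fine-tune a shared multilingual encoder on English labels, then apply it
# unchanged to another language.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # shared subword vocabulary across ~100 languages
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny English training set (1 = positive, 0 = negative) -- placeholder data.
train_texts = ["The film was wonderful.", "The film was terrible."]
train_labels = torch.tensor([1, 0])
batch = tokenizer(train_texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps on the toy batch
    out = model(**batch, labels=train_labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot evaluation on a Hindi sentence ("The film was great.").
model.eval()
test = tokenizer(["फ़िल्म शानदार थी।"], return_tensors="pt")
with torch.no_grad():
    pred = model(**test).logits.argmax(dim=-1)
print(pred)  # predicted sentiment label for the Hindi input
```

The same pattern underlies most cross-lingual transfer baselines: because the encoder's representations are shared across languages, a task head trained on one language can often generalize, imperfectly, to others without target-language supervision.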
Papers
Cognition-aware Cognate Detection
Diptesh Kanojia, Prashant Sharma, Sayali Ghodekar, Pushpak Bhattacharyya, Gholamreza Haffari, Malhar Kulkarni
Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Domains
Jaromir Savelka, Hannes Westermann, Karim Benyekhlef, Charlotte S. Alexander, Jayla C. Grant, David Restrepo Amariles, Rajaa El Hamdani, Sébastien Meeùs, Michał Araszkiewicz, Kevin D. Ashley, Alexandra Ashley, Karl Branting, Mattia Falduti, Matthias Grabmair, Jakub Harašta, Tereza Novotná, Elizabeth Tippett, Shiwanni Johnson