Zero-Shot Multilingual
Zero-shot multilingual models aim to perform tasks in languages, or language pairs, they were never explicitly trained on, leveraging pre-trained multilingual encoders together with techniques such as prompt engineering and data augmentation. Current research focuses on mitigating off-target translation (output generated in the wrong language) and on improving performance through probability calibration and cross-lingual consistency regularization, typically using transformer-based architectures. These advances are significant because they address the scarcity of labeled data for low-resource languages and enable efficient development of multilingual applications in areas like machine translation and speech synthesis.
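As a concrete illustration of the off-target problem, one common mitigation, available in Hugging Face's transformers library for models such as M2M-100, is to force the first decoded token to be the target-language code so the model cannot drift into the wrong language. The snippet below is a minimal sketch assuming the public facebook/m2m100_418M checkpoint; it shows the general technique, not the method of any particular paper listed on this page.

```python
# Minimal sketch: zero-shot translation with an explicit target-language
# constraint to reduce off-target output (wrong-language generations).
# Assumes the Hugging Face transformers library and the public
# facebook/m2m100_418M checkpoint; not tied to any specific paper above.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"  # language of the input text
inputs = tokenizer(
    "Zero-shot translation needs no parallel data for this language pair.",
    return_tensors="pt",
)

# forced_bos_token_id pins the first decoded token to the target-language
# code, steering generation toward the requested language (here: French).
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

Constraining decoding this way is a lightweight inference-time fix; the calibration and consistency-regularization approaches mentioned above instead adjust the model's training or scoring to achieve the same goal.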