Unseen Language
Unseen-language research focuses on enabling machine learning models to process and generate text and speech in languages not represented in their training data. Current efforts concentrate on adapting existing multilingual models, often built on transformer networks and diffusion models, through techniques such as zero-shot learning, few-shot learning, and in-context learning with linguistic resources such as dictionaries and grammars. This work is crucial for bridging the digital divide, preserving linguistic diversity, and advancing applications such as machine translation, speech recognition, and text-to-speech for low-resource and endangered languages.
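As a rough illustration of the in-context-learning idea mentioned above, the sketch below packs a small bilingual lexicon and a grammar note into the prompt of a pretrained multilingual language model and asks it to translate a sentence from an otherwise unseen language. It is not taken from any of the papers listed here; the model name, the toy dictionary, the grammar note, and the prompt format are all placeholder assumptions.

```python
# Minimal sketch, assuming an off-the-shelf multilingual LLM from Hugging Face.
# The dictionary, grammar note, and model choice are illustrative placeholders.
from transformers import pipeline

# Hypothetical bilingual lexicon and grammar hint for the unseen language.
dictionary = {
    "kira": "water",
    "tovu": "drink",
    "nal": "I",
}
grammar_note = "Word order is subject-object-verb; verbs are uninflected."

def build_prompt(source_sentence: str) -> str:
    """Assemble an in-context prompt from the linguistic resources."""
    lexicon_lines = "\n".join(f"{src} = {tgt}" for src, tgt in dictionary.items())
    return (
        "Translate from an unseen language into English.\n"
        f"Dictionary:\n{lexicon_lines}\n"
        f"Grammar: {grammar_note}\n"
        f"Sentence: {source_sentence}\n"
        "English translation:"
    )

# Placeholder model; any instruction-following multilingual LM could be swapped in.
generator = pipeline("text-generation", model="bigscience/bloom-560m")
prompt = build_prompt("nal kira tovu")
output = generator(prompt, max_new_tokens=20)[0]["generated_text"]
print(output)
```

The same prompt-construction pattern extends to few-shot use by prepending a handful of worked translation pairs before the target sentence.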
Papers
Nineteen papers, dated from October 12, 2023 to October 27, 2024.