Language Capability
Research on language capability focuses on enhancing the multilingual and cross-lingual abilities of large language models (LLMs) and automatic speech recognition (ASR) systems. Current efforts concentrate on parameter-efficient fine-tuning methods such as adapters, on continual-learning techniques such as Elastic Weight Consolidation (EWC) that add new languages without sacrificing performance on existing ones, and on robust evaluation metrics that quantify LLM performance across diverse languages. These advances are crucial for bridging the language gap in AI, improving access to technology for speakers of low-resource languages, and fostering more inclusive and equitable applications.
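To make the two named techniques concrete, below is a minimal PyTorch sketch, not drawn from any specific paper: a bottleneck adapter (only its few parameters are trained for a new language while the base model stays frozen) and an EWC penalty that anchors parameters important to previously learned languages. All names (`Adapter`, `fisher_diagonal`, `ewc_penalty`, the loader interface, and the `lam` weight) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class Adapter(torch.nn.Module):
    """Bottleneck adapter inserted after a frozen transformer layer;
    only these parameters are updated when adding a new language."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = torch.nn.Linear(hidden_dim, bottleneck_dim)
        self.up = torch.nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x):
        # Residual connection keeps the frozen model's behavior intact
        # when the adapter output is near zero.
        return x + self.up(torch.relu(self.down(x)))


def fisher_diagonal(model, data_loader, device="cpu"):
    """Estimate the diagonal Fisher information from squared gradients
    on data from the already-learned language(s)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
              if p.requires_grad}
    model.eval()
    for inputs, targets in data_loader:  # assumes (inputs, targets) batches
        inputs, targets = inputs.to(device), targets.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(inputs), targets)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}


def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    penalizing drift in parameters the old languages rely on."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty
```

During fine-tuning on the new language, the total loss would be the task loss plus `ewc_penalty(model, fisher, old_params)`, so the model trades off new-language fit against forgetting.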