Language Capability

Research on language capability focuses on strengthening the multilingual and cross-lingual abilities of large language models (LLMs) and automatic speech recognition (ASR) systems. Current efforts concentrate on parameter-efficient fine-tuning methods, such as adapters and Elastic Weight Consolidation (EWC), that add support for new languages without degrading performance on existing ones, and on robust evaluation metrics that quantify LLM performance across diverse languages. These advances are important for bridging the language gap in AI, improving access to technology for speakers of low-resource languages, and fostering more inclusive and equitable applications.
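The EWC approach mentioned above mitigates catastrophic forgetting by penalizing drift in parameters that were important for previously learned languages. A minimal sketch of the core penalty term, assuming a precomputed per-parameter Fisher information estimate (function and variable names here are illustrative, not from any specific paper or library):

```python
def ewc_penalty(theta, theta_star, fisher, lam=0.1):
    """Quadratic EWC regularizer: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameter values (after fine-tuning on the new language)
    theta_star -- parameter values at the old-task optimum
    fisher     -- per-parameter Fisher information (importance for old languages)
    lam        -- regularization strength (hypothetical default)
    """
    return 0.5 * lam * sum(
        f * (t - ts) ** 2
        for t, ts, f in zip(theta, theta_star, fisher)
    )


# Parameters that have not moved incur no penalty.
print(ewc_penalty([1.0, 2.0], [1.0, 2.0], [5.0, 5.0]))  # 0.0

# Drift in a high-Fisher (important) parameter is penalized heavily.
print(ewc_penalty([2.0], [1.0], [4.0], lam=2.0))  # 4.0
```

In practice this penalty is added to the new-language training loss, so gradient descent trades off new-language fit against preserving parameters the Fisher estimate marks as important for existing languages.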

Papers