Multilingual Training
Multilingual training aims to improve the performance and inclusivity of machine learning models by training them on data from multiple languages simultaneously. Current research focuses on mitigating biases that monolingual training can amplify, improving performance on low-resource languages through techniques such as cross-lingual transfer learning and knowledge distillation, and optimizing model architectures (e.g., transformer-based models) for efficient multilingual processing. This approach is significant because it addresses data scarcity in many languages, leading to more robust and equitable AI systems across applications such as text-to-speech, machine translation, and sentiment analysis.
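As a concrete illustration of one technique named above, here is a minimal sketch of the knowledge-distillation objective commonly used to compress a large multilingual teacher model into a smaller student: the student is trained to match the teacher's temperature-softened output distribution. The function names and the temperature value are illustrative, not taken from any specific paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened
    distributions, scaled by T^2 so gradients stay comparable
    across temperatures (the standard Hinton-style formulation)."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

When the student's logits match the teacher's exactly, the loss is zero; any divergence yields a positive penalty, so minimizing it pulls the student toward the teacher's cross-lingual behavior.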