Language-Specific
Language-specific research in artificial intelligence focuses on improving model performance and efficiency across diverse languages, addressing the challenges posed by linguistic differences and the limited resources available for many languages. Current work emphasizes models that balance shared and language-specific knowledge, often through Mixture-of-Experts architectures, sparse training techniques, and language-adaptive inference methods. This research matters because it enables more inclusive and effective AI applications, particularly in machine translation, speech recognition, and natural language understanding, where supporting diverse languages is essential.
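To make the shared vs. language-specific split concrete, below is a minimal sketch of a Mixture-of-Experts-style layer that mixes one shared expert with per-language experts. This is an illustrative toy, not the architecture of any paper listed here: the class name, the hard `lang_id` routing, and all hyperparameters are assumptions chosen for clarity (real systems typically use learned routers and many more experts).

```python
import torch
import torch.nn as nn


class LanguageMoELayer(nn.Module):
    """Toy layer combining a shared expert with per-language experts.

    Illustrative sketch only: routing by an explicit language ID is the
    simplest form of language-adaptive inference; production MoE models
    usually learn the routing instead.
    """

    def __init__(self, d_model: int, num_languages: int):
        super().__init__()
        # Shared expert: captures knowledge common to all languages.
        self.shared_expert = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        # One lightweight expert per language: language-specific knowledge.
        self.lang_experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_model),
                nn.GELU(),
                nn.Linear(d_model, d_model),
            )
            for _ in range(num_languages)
        )
        # Learned per-token gate mixing shared vs. language-specific outputs.
        self.gate = nn.Linear(d_model, 2)

    def forward(self, x: torch.Tensor, lang_id: int) -> torch.Tensor:
        # x: (batch, seq, d_model); lang_id selects the language expert.
        shared = self.shared_expert(x)
        specific = self.lang_experts[lang_id](x)
        weights = torch.softmax(self.gate(x), dim=-1)  # (batch, seq, 2)
        # Broadcast the two gate weights over the model dimension.
        return weights[..., 0:1] * shared + weights[..., 1:2] * specific


if __name__ == "__main__":
    layer = LanguageMoELayer(d_model=64, num_languages=3)
    tokens = torch.randn(2, 10, 64)   # dummy batch of token embeddings
    out = layer(tokens, lang_id=1)    # route through the "language 1" expert
    print(out.shape)                  # torch.Size([2, 10, 64])
```

The per-language experts here stand in for the sparse, language-specific parameters that several of the papers below investigate; only the selected expert runs at inference time, which is what makes such designs attractive for lower-resource languages.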
Papers
An Agentic Approach to Automatic Creation of P&ID Diagrams from Natural Language Descriptions
Shreeyash Gowaikar, Srinivasan Iyengar, Sameer Segal, Shivkumar Kalyanaraman
CAMEL: Cross-Attention Enhanced Mixture-of-Experts and Language Bias for Code-Switching Speech Recognition
He Wang, Xucheng Wan, Naijun Zheng, Kai Liu, Huan Zhou, Guojian Li, Lei Xie
Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT
Jenny Kunz
Bilingual BSARD: Extending Statutory Article Retrieval to Dutch
Ehsan Lotfi, Nikolay Banar, Nerses Yuzbashyan, Walter Daelemans
The Rise and Down of Babel Tower: Investigating the Evolution Process of Multilingual Code Large Language Model
Jiawei Chen, Wentao Chen, Jing Su, Jingjing Xu, Hongyu Lin, Mengjie Ren, Yaojie Lu, Xianpei Han, Le Sun