Paper ID: 2408.03544
NatLan: Native Language Prompting Facilitates Knowledge Elicitation Through Language Trigger Provision and Domain Trigger Retention
Baixuan Li, Yunlong Fan, Tianyi Ma, Zhiqiang Gao
Multilingual large language models (MLLMs) answer questions in non-dominant languages less accurately than in their dominant language. Although existing translate-then-answer methods alleviate this issue, the mechanisms behind their effectiveness remain unclear. In this study, we analogize the dominant language of MLLMs to the native language of humans and use two human cognitive features, the Language Trigger (LT) and the Domain Trigger (DT), to interpret the mechanisms behind translate-then-answer methods. This analysis reveals that although these methods provide sufficient LTs, they fall short in retaining DTs. To mitigate this issue, we propose Native Language Prompting (NatLan), which employs a Multi-MLLM collaboration strategy and introduces an additional role-enhanced, domain-specific MLLM with stronger multilingual understanding capabilities as the translator. Across five language QA benchmarks, NatLan achieves up to a 31.28% improvement in accuracy and, compared to existing state-of-the-art methods, provides comparable or greater retention of DTs in up to 87% of cases. Our code is available at this https URL.
Submitted: Aug 7, 2024
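
As a rough illustration of the translate-then-answer flow the abstract describes, the sketch below outlines NatLan's two-step Multi-MLLM collaboration: a role-enhanced, domain-specific translator MLLM first renders the question into the answering model's dominant language (providing the language trigger while trying to retain domain triggers), and the main MLLM then answers it. The `chat` helper, model names, and prompt wording are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of the NatLan translate-then-answer flow (assumptions noted below).
# `chat()` is a placeholder for any chat-completion call; model names and prompt
# wording are hypothetical, not taken from the paper's code release.

def chat(model: str, system: str, user: str) -> str:
    """Placeholder for a chat-completion API call; returns the model's reply."""
    raise NotImplementedError("wire up your own MLLM backend here")


def natlan_answer(question_non_dominant: str, domain: str) -> str:
    # Step 1: a role-enhanced, domain-specific translator MLLM renders the
    # question into the answering model's dominant language (assumed English),
    # aiming to keep domain-specific terms (domain triggers) intact.
    translator_system = (
        f"You are an expert translator specializing in {domain}. Translate the "
        "question into English, preserving all domain terminology exactly."
    )
    english_question = chat("translator-mllm", translator_system, question_non_dominant)

    # Step 2: the answering MLLM responds in its dominant language, where its
    # knowledge is most readily elicited (language trigger supplied by Step 1).
    answer_system = "Answer the question accurately and concisely."
    return chat("answering-mllm", answer_system, english_question)
```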