Paper ID: 2210.12910
Specializing Multi-domain NMT via Penalizing Low Mutual Information
Jiyoung Lee, Hantae Kim, Hyunchang Cho, Edward Choi, Cheonbok Park
Multi-domain Neural Machine Translation (NMT) trains a single model on multiple domains, an appealing setup because one model can serve all domains efficiently. Ideally, a multi-domain NMT model should learn the distinctive characteristics of each domain simultaneously; however, capturing such domain peculiarities is a non-trivial task. In this paper, we investigate domain-specific information through the lens of mutual information (MI) and propose a new objective that penalizes low MI, pushing it higher. Our method achieves state-of-the-art performance among current competitive multi-domain NMT models. We also show empirically that our objective raises low MI, yielding a domain-specialized multi-domain NMT model.
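For intuition, a minimal sketch of how token-level MI can be instantiated in NMT (the abstract does not spell out the exact formulation, so the symbols below are illustrative assumptions): the pointwise mutual information between a domain label $d$ and each target token $y_t$, given the source $x$ and prefix $y_{<t}$, can be written as

$$\mathrm{MI}(d; y_t \mid x, y_{<t}) = \log \frac{p(y_t \mid x, y_{<t}, d)}{p(y_t \mid x, y_{<t})}$$

which is high when conditioning on the domain changes the token distribution. A penalty on low MI could then take a hinged form added to the usual negative log-likelihood,

$$\mathcal{L} = \mathcal{L}_{\mathrm{NLL}} + \lambda \sum_t \max\bigl(0,\; \tau - \mathrm{MI}(d; y_t \mid x, y_{<t})\bigr)$$

where the margin form and the hyperparameters $\lambda$ and $\tau$ are hypothetical choices for illustration, not necessarily the paper's exact loss.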
Submitted: Oct 24, 2022