View Translation
View translation refers to the automated conversion of information between different modalities (e.g., text, speech, images, sign language) and languages, with the aim of bridging communication gaps across diverse forms of expression. Current research emphasizes improving translation accuracy and efficiency with large language models (LLMs), exploring techniques such as contrastive preference optimization, attention-mechanism refinements, and multi-source pivoting, often within specific architectures such as Transformers and Conformers. This work is central to multilingual natural language processing, enabling broader access to information and facilitating cross-cultural communication in applications including healthcare, education, and cybersecurity.
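To make one of the named techniques concrete, below is a minimal sketch of a contrastive preference optimization (CPO) style objective: a DPO-like preference term without a reference model, plus a negative log-likelihood term on the preferred translation. The function name, the `beta` value, and the plain-float interface are illustrative assumptions, not an implementation from any of the listed papers.

```python
import math

def cpo_loss(logp_preferred: float, logp_dispreferred: float, beta: float = 0.1) -> float:
    """Sketch of a CPO-style objective (illustrative, not from a specific paper).

    logp_preferred / logp_dispreferred: sequence log-probabilities that the
    model assigns to the preferred and dispreferred translations.
    """
    # Preference term: -log sigmoid of the scaled log-probability margin.
    margin = beta * (logp_preferred - logp_dispreferred)
    preference_term = -math.log(1.0 / (1.0 + math.exp(-margin)))
    # NLL term: keeps the model anchored to the preferred translation.
    nll_term = -logp_preferred
    return preference_term + nll_term
```

In this sketch, the loss drops as the model assigns a larger margin to the preferred translation, while the NLL term prevents the margin from being achieved by degrading the preferred output itself.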
Papers
Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models
Zhiwei He, Binglin Zhou, Hongkun Hao, Aiwei Liu, Xing Wang, Zhaopeng Tu, Zhuosheng Zhang, Rui Wang
Leveraging Translation For Optimal Recall: Tailoring LLM Personalization With User Profiles
Karthik Ravichandran, Sarmistha Sarna Gomasta
How do Hyenas deal with Human Speech? Speech Recognition and Translation with ConfHyena
Marco Gaido, Sara Papi, Matteo Negri, Luisa Bentivogli
Simpson's Paradox and the Accuracy-Fluency Tradeoff in Translation
Zheng Wei Lim, Ekaterina Vylomova, Trevor Cohn, Charles Kemp
OWSM-CTC: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification
Yifan Peng, Yui Sudo, Muhammad Shakeel, Shinji Watanabe