Sentence Level
Sentence-level analysis in natural language processing focuses on understanding and processing individual sentences within larger texts in order to improve downstream tasks. Current research emphasizes building robust sentence representations with techniques such as multi-task learning, transformer architectures (e.g., RoBERTa), and combined sentence- and token-level objectives that capture finer-grained information. This work underpins applications such as machine translation, lexicography, and automated essay scoring, where accurate sentence-level understanding is essential for high performance and better human-computer interaction.
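As a minimal sketch of how sentence- and token-level information relate, the example below mean-pools token-level vectors (of the kind a transformer such as RoBERTa produces per token) into a single sentence-level vector, skipping padding positions. The function name, embedding values, and mask are illustrative assumptions, not taken from any of the papers listed here.

```python
def mean_pool(token_embeddings, attention_mask):
    """Collapse token-level vectors into one sentence-level vector,
    ignoring positions whose attention-mask value is 0 (padding)."""
    kept = [vec for vec, m in zip(token_embeddings, attention_mask) if m]
    dim = len(kept[0])
    return [sum(vec[d] for vec in kept) / len(kept) for d in range(dim)]

# Toy example: four tokens with 3-dimensional embeddings;
# the last row is padding and is excluded by the mask.
tokens = [[1.0, 0.0, 2.0],
          [3.0, 2.0, 0.0],
          [2.0, 4.0, 1.0],
          [9.0, 9.0, 9.0]]
mask = [1, 1, 1, 0]
sentence_vec = mean_pool(tokens, mask)  # [2.0, 2.0, 1.0]
```

In practice, systems often train such pooled representations with a sentence-level objective while keeping a separate token-level objective on the unpooled vectors, which is one way to retain finer-grained information.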
Papers
DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory
Yutong Wang, Jiali Zeng, Xuebo Liu, Derek F. Wong, Fandong Meng, Jie Zhou, Min Zhang
Modeling User Preferences with Automatic Metrics: Creating a High-Quality Preference Dataset for Machine Translation
Sweta Agrawal, José G. C. de Souza, Ricardo Rei, António Farinhas, Gonçalo Faria, Patrick Fernandes, Nuno M Guerreiro, Andre Martins
A Multi-task Learning Framework for Evaluating Machine Translation of Emotion-loaded User-generated Content
Shenbin Qian, Constantin Orăsan, Diptesh Kanojia, Félix do Carmo
Generating bilingual example sentences with large language models as lexicography assistants
Raphael Merx, Ekaterina Vylomova, Kemal Kurniawan
Proofread: Fixes All Errors with One Tap
Renjie Liu, Yanxiang Zhang, Yun Zhu, Haicheng Sun, Yuanbo Zhang, Michael Xuelin Huang, Shanqing Cai, Lei Meng, Shumin Zhai
Pointer-Guided Pre-Training: Infusing Large Language Models with Paragraph-Level Contextual Awareness
Lars Hillebrand, Prabhupad Pradhan, Christian Bauckhage, Rafet Sifa