Sentence Level
Sentence-level analysis in natural language processing focuses on understanding and processing individual sentences within larger texts, with the aim of improving downstream tasks. Current research emphasizes building robust sentence representations through multi-task learning, transformer encoders (e.g., RoBERTa), and combined sentence- and token-level objectives that capture finer-grained information. This work is central to applications such as machine translation, lexicography, and automated essay scoring, where accurate sentence-level understanding drives both task performance and better human-computer interaction.
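To make the representation-learning idea above concrete, the sketch below shows one common way to derive a sentence-level vector from a pretrained RoBERTa encoder by mean-pooling its token-level hidden states. It is an illustrative example using the Hugging Face transformers and torch libraries, not an implementation from any of the listed papers; the choice of roberta-base and of mean pooling are assumptions for the sake of the example.

```python
# Minimal sketch: a sentence-level representation from RoBERTa via mean pooling
# over token-level hidden states (assumes `transformers` and `torch` are installed).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Return a fixed-size sentence vector by mean-pooling token states."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    token_states = outputs.last_hidden_state        # (1, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1)   # (1, seq_len, 1)
    summed = (token_states * mask).sum(dim=1)       # mask out padding positions
    counts = mask.sum(dim=1).clamp(min=1)
    return (summed / counts).squeeze(0)             # (hidden,)

vec = sentence_embedding("Sentence-level analysis targets individual sentences.")
print(vec.shape)  # torch.Size([768]) for roberta-base
```

In practice, such token-level states can also feed token-level objectives (e.g., tagging or span prediction) alongside the pooled sentence objective, which is one way the sentence- and token-level signals mentioned above are combined.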
Papers
Improving Explainability of Sentence-level Metrics via Edit-level Attribution for Grammatical Error Correction
Takumi Goto, Justin Vasselli, Taro Watanabe
Detecting Document-level Paraphrased Machine Generated Content: Mimicking Human Writing Style and Involving Discourse Features
Yupei Li, Manuel Milling, Lucia Specia, Björn W. Schuller
EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation
Taeho Hwang, Sukmin Cho, Soyeong Jeong, Hoyun Song, SeungYoon Han, Jong C. Park
DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory
Yutong Wang, Jiali Zeng, Xuebo Liu, Derek F. Wong, Fandong Meng, Jie Zhou, Min Zhang
Modeling User Preferences with Automatic Metrics: Creating a High-Quality Preference Dataset for Machine Translation
Sweta Agrawal, José G. C. de Souza, Ricardo Rei, António Farinhas, Gonçalo Faria, Patrick Fernandes, Nuno M Guerreiro, Andre Martins
A Multi-task Learning Framework for Evaluating Machine Translation of Emotion-loaded User-generated Content
Shenbin Qian, Constantin Orăsan, Diptesh Kanojia, Félix do Carmo
Generating bilingual example sentences with large language models as lexicography assistants
Raphael Merx, Ekaterina Vylomova, Kemal Kurniawan