Sentence Simplification
Sentence simplification aims to rewrite complex text into easier-to-understand versions while preserving its meaning, primarily benefiting readers with reading difficulties and language learners. Current research focuses on improving the accuracy and fluency of simplification with approaches such as large language models (LLMs) like GPT-4 and transformer-based sequence-to-sequence models, often combined with techniques such as edit-constrained decoding and lexical paraphrasing. These advances are crucial for making information more accessible across diverse domains, from medical reports to educational materials. The field is also actively developing new datasets and evaluation metrics to better assess model performance, including how well systems preserve meaning and factuality.
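To make the sequence-to-sequence approach mentioned above concrete, the sketch below shows how a fine-tuned encoder-decoder model could be applied to a complex sentence via the Hugging Face transformers library. The checkpoint name and the "simplify:" task prefix are placeholders, not a specific released model; any encoder-decoder model fine-tuned on a simplification corpus could be substituted.

```python
# Minimal sketch of sentence simplification with a seq2seq transformer.
# MODEL_NAME is a hypothetical fine-tuned checkpoint, used here only for illustration.
from transformers import pipeline

MODEL_NAME = "your-org/t5-base-sentence-simplification"  # placeholder checkpoint

simplifier = pipeline("text2text-generation", model=MODEL_NAME)

complex_sentence = (
    "The municipality promulgated an ordinance prohibiting the operation of "
    "motorized vehicles within the designated pedestrian thoroughfare."
)

# T5-style models typically expect a task prefix; the exact prefix depends on
# how the checkpoint was fine-tuned.
result = simplifier("simplify: " + complex_sentence, max_length=64, num_beams=4)
print(result[0]["generated_text"])
```

The same interface works for LLM-based simplification by swapping the pipeline call for a prompted chat model; edit-constrained decoding would additionally restrict generation to a set of allowed edit operations during beam search.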
Papers
SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages
Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu
DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
Regina Stodden, Omar Momen, Laura Kallmeyer
Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes
Liubov O. Elkhovskaya, Alexander D. Kshenin, Marina A. Balakhontceva, Sergey V. Kovalchuk
Unsupervised Sentence Simplification via Dependency Parsing
Vy Vo, Weiqing Wang, Wray Buntine