Text Simplification
Text simplification aims to rewrite complex texts into easier-to-understand versions while preserving their meaning, primarily benefiting individuals with cognitive impairments or limited literacy. Current research relies heavily on large language models (LLMs) such as T5, BART, and GPT variants, often employing techniques like fine-tuning, prompt engineering, and constrained decoding to improve simplification quality and control readability. The field is crucial for making information accessible across domains ranging from medical reports to educational materials; ongoing work focuses on developing better evaluation metrics and on addressing challenges such as information loss and the need for diverse, high-quality training data.
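Controlling and evaluating readability, as mentioned above, is often done with surface-level formulas such as the Flesch-Kincaid grade level. The sketch below is a minimal illustration of that idea: the formula is the standard Flesch-Kincaid one, but the syllable counter is a crude vowel-group heuristic and the two example sentences are invented for illustration; production systems use pronunciation dictionaries or learned readability models instead.

```python
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, discounting a trailing silent "e".
    # Real readability tools use pronunciation dictionaries (e.g. CMUdict).
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)


def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)


# Illustrative sentence pair (invented, not from a simplification corpus):
complex_text = "The physician administered the medication subsequent to the evaluation."
simple_text = "The doctor gave the medicine after the check."
```

A successful simplification should lower the grade level: here `flesch_kincaid_grade(simple_text)` scores well below `flesch_kincaid_grade(complex_text)`. Metrics like this are cheap but blind to meaning preservation, which is why the field also develops dedicated simplification metrics.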
Papers
SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages
Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming Xiong, Chien-Sheng Wu
DEPLAIN: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
Regina Stodden, Omar Momen, Laura Kallmeyer