Text Simplification
Text simplification rewrites complex texts into easier-to-understand versions while preserving their meaning, primarily benefiting readers with cognitive impairments or limited literacy. Current research relies heavily on large language models (LLMs) such as T5, BART, and GPT variants, often combined with fine-tuning, prompt engineering, and constrained decoding to improve simplification quality and control output readability. The field is central to making information accessible across domains ranging from medical reports to educational materials, and ongoing work focuses on better evaluation metrics and on challenges such as information loss and the scarcity of diverse, high-quality training data.
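As a minimal sketch of the prompt-based, seq2seq approach mentioned above: the snippet below prepends a task prefix to the input and decodes with beam search, in the style of T5. The `simplify:` prefix and the generic `t5-base` checkpoint are illustrative assumptions only; in practice one would use a model actually fine-tuned for simplification.

```python
# Minimal sketch: prompt a T5-style seq2seq model to simplify a sentence.
# NOTE: "t5-base" is a placeholder checkpoint and "simplify:" is an assumed
# task prefix; t5-base was not trained on this prefix, so a real system
# would substitute a simplification fine-tune.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-base"  # placeholder; swap in a simplification-tuned model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def simplify(text: str, max_new_tokens: int = 128) -> str:
    # Task prefix follows the T5 convention of prepending an instruction.
    inputs = tokenizer("simplify: " + text, return_tensors="pt", truncation=True)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_beams=4,             # beam search for more fluent rewrites
        no_repeat_ngram_size=3,  # a light constraint against repetition
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(simplify(
    "The patient exhibited symptoms consistent with myocardial infarction."
))
```

Decoding constraints like `num_beams` and `no_repeat_ngram_size` are the simplest form of the constrained decoding mentioned above; readability-controlled systems typically add stronger mechanisms, such as control tokens or length penalties.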
Papers
MANTIS at TSAR-2022 Shared Task: Improved Unsupervised Lexical Simplification with Pretrained Encoders
Xiaofei Li, Daniel Wiechmann, Yu Qiao, Elma Kerz
(Psycho-)Linguistic Features Meet Transformer Models for Improved Explainable and Controllable Text Simplification
Yu Qiao, Xiaofei Li, Daniel Wiechmann, Elma Kerz
LENS: A Learnable Evaluation Metric for Text Simplification
Mounica Maddela, Yao Dou, David Heineman, Wei Xu