Text Simplification
Text simplification aims to rewrite complex texts into easier-to-understand versions while preserving their meaning, primarily benefiting individuals with cognitive impairments or limited literacy. Current research relies heavily on large language models (LLMs) such as T5, BART, and GPT variants, often combined with techniques like fine-tuning, prompt engineering, and constrained decoding to improve simplification quality and control readability. The field is crucial for making information accessible across domains, from medical reports to educational materials; ongoing work focuses on developing better evaluation metrics and on challenges such as information loss and the need for diverse, high-quality training data.
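Since several of the papers below build on this fine-tuning and prompting recipe, a concrete example may help. Below is a minimal sketch of prompt-based simplification with a seq2seq model via the Hugging Face transformers library; the checkpoint (t5-base), the "simplify:" task prefix, and the decoding settings are illustrative assumptions, not any listed paper's actual setup.

```python
# A minimal sketch of prompt-based text simplification with a seq2seq LLM.
# Checkpoint name, task prefix, and decoding settings are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-base"  # assumption: in practice, a simplification-fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

source = ("The physician administered an analgesic to mitigate "
          "the patient's postoperative discomfort.")

# T5-style task prefix; a real system would use whatever prompt or prefix
# its model was actually trained with.
inputs = tokenizer("simplify: " + source, return_tensors="pt", truncation=True)

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    num_beams=4,        # plain beam search; constrained decoding would add
                        # lexical or readability constraints at this step
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Readability control of the kind studied in several papers below is typically checked by scoring the output with an automated metric such as Flesch-Kincaid grade level and comparing it against a target.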
Papers
Data and Approaches for German Text simplification -- towards an Accessibility-enhanced Communication
Thorben Schomacker, Michael Gille, Jörg von der Hülls, Marina Tropmann-Frick
Exploring Automatic Text Simplification of German Narrative Documents
Thorben Schomacker, Tillmann Dönicke, Marina Tropmann-Frick
A Novel Dataset for Financial Education Text Simplification in Spanish
Nelson Perez-Rojas, Saul Calderon-Ramirez, Martin Solis-Salazar, Mario Romero-Sandoval, Monica Arias-Monge, Horacio Saggion
Do Text Simplification Systems Preserve Meaning? A Human Evaluation via Reading Comprehension
Sweta Agrawal, Marine Carpuat
Investigating Large Language Models and Control Mechanisms to Improve Text Readability of Biomedical Abstracts
Zihao Li, Samuel Belkadi, Nicolo Micheletti, Lifeng Han, Matthew Shardlow, Goran Nenadic
Is it Possible to Modify Text to a Target Readability Level? An Initial Investigation Using Zero-Shot Large Language Models
Asma Farajidizaji, Vatsal Raina, Mark Gales