Text Quality
Assessing text quality is crucial for advancing natural language processing, particularly with the rise of large language models (LLMs). Current research focuses on developing more robust, human-aligned evaluation metrics, often leveraging LLMs themselves as judges or ensembling model-based scores with n-gram approaches to better capture nuanced aspects of text quality such as coherence, fluency, and faithfulness. These improvements are vital for making LLM-generated content more reliable and for training future models more effectively, since better quality metrics enable stronger filtering of training data and a more efficient training process. A minimal sketch of such an ensemble is given below.
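
The following is a minimal, illustrative sketch of the ensemble idea described above: blending an n-gram overlap score with a model-based judgment. The function names (ngram_overlap, llm_judge_score, ensemble_quality), the stubbed judge score, and the equal weighting are assumptions for illustration, not any specific paper's method; in practice the judge would prompt an actual LLM to rate the text on a criterion such as coherence.

```python
from collections import Counter

def ngram_overlap(candidate: str, reference: str, n: int = 2) -> float:
    """F1-style overlap between candidate and reference n-grams."""
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(zip(*(tokens[i:] for i in range(n))))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def llm_judge_score(candidate: str, criterion: str = "coherence") -> float:
    """Hypothetical LLM-as-judge rating in [0, 1].

    A real implementation would prompt a language model to rate the
    candidate on the given criterion; here it is stubbed for illustration.
    """
    return 0.8

def ensemble_quality(candidate: str, reference: str, weight: float = 0.5) -> float:
    """Weighted blend of a model-based judgment and an n-gram overlap score."""
    return (weight * llm_judge_score(candidate)
            + (1 - weight) * ngram_overlap(candidate, reference))

if __name__ == "__main__":
    print(ensemble_quality("the cat sat on the mat",
                           "a cat was sitting on the mat"))
```

A score like this could serve either as an evaluation metric for generated text or as a filtering signal over candidate training documents, with the weight tuned against human quality judgments.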