Generated Text
Research on generated text focuses on understanding and mitigating the challenges posed by increasingly sophisticated large language models (LLMs) that produce human-quality text. Current efforts concentrate on detecting machine-generated text, often using latent-space analysis and fine-tuned transformer classifiers (e.g., RoBERTa, DeBERTa) to identify subtle stylistic and structural differences between human- and AI-generated content. This work is crucial for addressing concerns about misinformation, plagiarism, and authenticity in domains ranging from education and journalism to legal and scientific publishing.
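Detection is typically framed as binary sequence classification over the text. Below is a minimal sketch of that setup using the Hugging Face transformers library; `roberta-base` is a placeholder checkpoint, and the randomly initialized classification head would need fine-tuning on labeled human vs. machine-generated text before its scores carry any signal.

```python
# Minimal sketch: machine-generated-text detection as binary sequence
# classification with a RoBERTa encoder. "roberta-base" is a placeholder;
# a usable detector would be a checkpoint fine-tuned on labeled
# human/AI text, so the probabilities here are illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: swap in any fine-tuned detector checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def p_machine_generated(text: str) -> float:
    """Return the model's probability that `text` is machine-generated (label 1)."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(f"P(machine-generated) = {p_machine_generated('The results were, in every measurable sense, optimal.'):.3f}")
```

Fine-tuning such a classifier on paired human and LLM outputs is the standard recipe behind many published detectors; the Schneider et al. paper listed below asks the converse question of whether generators can be trained to evade exactly this kind of model.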
Papers
October 25, 2023
How well can machine-generated texts be identified and can language models be trained to avoid identification?
Sinclair Schneider, Florian Steuber, Joao A. G. Schneider, Gabi Dreo Rodosek
CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts
Aviya Maimon, Reut Tsarfaty