Generated Text
Research on generated text focuses on understanding and mitigating the challenges posed by increasingly sophisticated large language models (LLMs) that produce human-quality text. Current efforts concentrate on detecting machine-generated text, often using latent-space analysis and fine-tuned transformer classifiers (e.g., RoBERTa, DeBERTa) to pick up subtle differences in writing style and structure between human- and AI-written content. This work is crucial for addressing concerns about misinformation, plagiarism, and authenticity, with impact across domains ranging from education and journalism to legal and scientific publishing.
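To make the detection idea concrete, the sketch below computes a few simple stylometric features of the kind a human-vs-machine text classifier might consume. This is a toy illustration in pure Python, not any specific method from the papers listed here; real detectors typically rely on learned representations from fine-tuned transformer models rather than hand-crafted counts, and the feature names are invented for this example.

```python
import re
from collections import Counter

def stylometric_features(text: str) -> dict:
    """Extract simple surface features that a machine-generated-text
    detector could feed to a downstream classifier.

    Illustrative only: production detectors use learned embeddings
    (e.g., from a fine-tuned RoBERTa) rather than these raw counts.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    counts = Counter(words)
    return {
        # Mean sentence length in words; generated text often shows
        # unusually low variance in sentence length.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Type-token ratio: lexical diversity of the vocabulary used.
        "type_token_ratio": len(counts) / max(len(words), 1),
        # Density of mid-sentence punctuation per word.
        "punct_density": len(re.findall(r"[,;:]", text)) / max(len(words), 1),
    }

sample = "The model writes fluent text. The model writes fluent text again."
feats = stylometric_features(sample)
```

A classifier trained on such features (or, more realistically, on transformer embeddings) would then output a human-vs-machine label for each document.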
Papers
How well can machine-generated texts be identified and can language models be trained to avoid identification?
Sinclair Schneider, Florian Steuber, Joao A. G. Schneider, Gabi Dreo Rodosek
CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts
Aviya Maimon, Reut Tsarfaty