Human-Written Text
Research on human-written text currently centers on distinguishing it from AI-generated text, driven by concerns about misinformation and plagiarism. Detection methods typically combine transformer-based language models such as BERT with classifiers such as XGBoost to analyze linguistic features, coherence, and information density. Reliable differentiation between human- and AI-generated text has significant implications for education, journalism, and legal contexts, shaping how authenticity and authorship are assessed.
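The feature-plus-classifier pipeline described above can be sketched briefly. The snippet below is a minimal illustration, not drawn from any of the listed papers: the hand-crafted linguistic features, the toy corpus, and the labels are illustrative assumptions, and a real detector would typically add model-based features such as BERT embeddings or perplexity scores.

```python
# Minimal sketch: human (0) vs. AI-generated (1) text classification with
# shallow linguistic features and XGBoost. Features and data are illustrative.
import re
import numpy as np
from xgboost import XGBClassifier

def linguistic_features(text: str) -> list[float]:
    """Shallow proxies for lexical diversity and sentence shape (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    return [
        len(set(words)) / n_words,             # type-token ratio (lexical diversity)
        n_words / n_sents,                     # mean sentence length in words
        sum(len(w) for w in words) / n_words,  # mean word length
        text.count(",") / n_sents,             # commas per sentence
    ]

# Toy corpus standing in for labeled human (0) vs. AI-generated (1) samples.
texts = [
    "I scribbled this note quickly, typos and all, before running out the door.",
    "The results, frankly, surprised us; nobody expected the second run to fail.",
    "In conclusion, it is important to note that the aforementioned factors are significant.",
    "Furthermore, it is essential to consider that these elements are crucial to the outcome.",
]
labels = [0, 0, 1, 1]

X = np.array([linguistic_features(t) for t in texts])
clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X, labels)

sample = "It is important to note that this essay covers several significant points."
print(clf.predict_proba(np.array([linguistic_features(sample)]))[0])
```

In published detectors, such classifiers are usually fed richer inputs (fine-tuned BERT-style embeddings, perplexity, or stylometric profiles) and are often paired with explainable-AI techniques, as several of the papers below do, so that the features driving a human-vs-AI prediction can be attributed.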
Papers
DAMAGE: Detecting Adversarially Modified AI Generated Text
Elyas Masrour, Bradley Emi, Max Spero
Leveraging Explainable AI for LLM Text Attribution: Differentiating Human-Written and Multiple LLMs-Generated Text
Ayat Najjar, Huthaifa I. Ashqar, Omar Darwish, Eman Hammad
Detecting AI-Generated Text in Educational Content: Leveraging Machine Learning and Explainable AI for Academic Integrity
Ayat A. Najjar, Huthaifa I. Ashqar, Omar A. Darwish, Eman Hammad
Evaluation of LLM Vulnerabilities to Being Misused for Personalized Disinformation Generation
Aneta Zugecova, Dominik Macko, Ivan Srba, Robert Moro, Jakub Kopal, Katarina Marcincinova, Matus Mesarcik
Are LLMs Good Literature Review Writers? Evaluating the Literature Review Writing Ability of Large Language Models
Xuemei Tang, Xufeng Duan, Zhenguang G. Cai