Human-Written Text

Research on human-written text currently focuses on distinguishing it from AI-generated text, driven by concerns about misinformation and plagiarism. Detection methods typically pair pretrained language models such as BERT with classifiers such as XGBoost to analyze linguistic features, coherence, and information density. Reliably differentiating human-written from AI-generated text has significant implications for education, journalism, and legal contexts, shaping how authenticity and authorship are assessed.
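
As a rough illustration of that pipeline, the Python sketch below pairs mean-pooled BERT embeddings with an XGBoost classifier. The toy corpus, labels, and hyperparameters are placeholders of my own, not drawn from any of the papers below; a real detector would train on a large labeled corpus and typically add features such as perplexity or coherence scores.

    import torch
    from transformers import AutoTokenizer, AutoModel
    from xgboost import XGBClassifier

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased")

    def embed(texts):
        # Mean-pool BERT's last hidden states into one fixed-size vector per text.
        enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = bert(**enc).last_hidden_state    # (batch, seq_len, 768)
        mask = enc["attention_mask"].unsqueeze(-1)    # zero out padding positions
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

    # Toy placeholder corpus: label 0 = human-written, 1 = AI-generated.
    texts = [
        "honestly i just rambled this out on my phone, sorry for typos",
        "In conclusion, there are several key factors to consider in this regard.",
        "The rain ruined my notes, so half of this is from memory.",
        "This comprehensive overview explores the multifaceted aspects of the topic.",
    ]
    labels = [0, 1, 0, 1]

    # Gradient-boosted classifier over the embeddings; hyperparameters are illustrative.
    clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    clf.fit(embed(texts), labels)

    # Probability that a new passage is AI-generated (class 1).
    print(clf.predict_proba(embed(["A new passage to score."]))[:, 1])

The split of roles here mirrors the approach described above: the language model supplies a dense representation of the text, while the boosted-tree model does the actual human-vs-AI classification.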

Papers