Human Generated
Research on human-generated content focuses on distinguishing human-created text, images, and audio from AI-generated counterparts, driven by concerns about misinformation and the ethical implications of increasingly sophisticated generative models. Current work applies machine learning techniques, including large language models (LLMs) and deep neural networks, to textual features, visual patterns, and audio characteristics in order to improve detection accuracy. Robust detection methods are crucial for safeguarding against malicious use and for ensuring the authenticity of information across domains ranging from news media to education.
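As a rough illustration of the detection task described above (not a method from any of the papers listed below), the sketch that follows trains a lightweight text classifier to separate human-written from AI-generated passages. The example texts, labels, and feature choices are assumptions for demonstration; practical detectors typically rely on large labeled corpora and fine-tuned neural models.

```python
# Minimal sketch of a human-vs-AI text detector using surface textual features.
# The training examples and labels below are placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = human-written, 0 = AI-generated (assumed labels).
texts = [
    "I scribbled this note on the train, sorry for the typos.",
    "The results, while preliminary, surprised everyone in the lab.",
    "As an AI language model, I can provide a comprehensive overview.",
    "In conclusion, it is important to note that there are many factors.",
]
labels = [1, 1, 0, 0]

# Word n-gram TF-IDF features feed a linear classifier; this is a common
# lightweight baseline before moving to neural or LLM-based detectors.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Probability that a new passage is human-written vs. AI-generated.
print(detector.predict_proba(
    ["It is important to note that, in conclusion, many factors exist."]
))
```

In practice such a baseline is only a starting point; the surveyed research replaces these hand-picked features with learned representations from deep networks or LLMs to improve robustness across domains.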
Papers
Text and Audio Simplification: Human vs. ChatGPT
Gondy Leroy, David Kauchak, Philip Harber, Ankit Pal, Akash Shukla
It's Difficult to be Neutral -- Human and LLM-based Sentiment Annotation of Patient Comments
Petter Mæhlum, David Samuel, Rebecka Maria Norman, Elma Jelin, Øyvind Andresen Bjertnæs, Lilja Øvrelid, Erik Velldal