Human Generated
Research on human-generated content focuses on distinguishing human-created text, images, and audio from AI-generated counterparts, driven by concerns about misinformation and the ethical implications of increasingly sophisticated generative models. Current work applies machine learning techniques, including large language models (LLMs) and deep neural networks, to textual features, visual patterns, and audio characteristics in order to improve detection accuracy. This field is crucial for developing robust methods to identify AI-generated content, safeguarding against malicious use and ensuring the authenticity of information across domains ranging from news media to education.
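As a minimal illustration of the text-detection side of this line of work, the sketch below trains a simple TF-IDF plus logistic-regression classifier to separate human-written from AI-generated passages. The toy passages, labels, and feature choices are illustrative assumptions only and are not drawn from any of the papers listed here; real detectors are trained on large labelled corpora and often rely on LLM-based or deep neural features.

```python
# Minimal sketch: a human-vs-AI text detector using TF-IDF features and
# logistic regression. The toy passages and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = human-written, 0 = AI-generated (hypothetical labels).
texts = [
    "Honestly, the bus was late again so I just walked home in the rain.",
    "I can't believe how good that tiny ramen place near the station is.",
    "The proposed framework leverages synergistic paradigms to optimize outcomes.",
    "In conclusion, it is important to note that the topic has many facets.",
]
labels = [1, 1, 0, 0]

# Character n-grams capture stylistic cues (punctuation habits, word endings)
# that word-level features can miss.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Estimated probability that a new passage is human-written.
sample = "It is worth noting that numerous factors contribute to this phenomenon."
print(detector.predict_proba([sample])[0][1])
```

A shallow classifier like this is only a baseline; the papers below explore stronger signals, such as neural models and human-in-the-loop evaluation, for the same discrimination task.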
Papers
ActiveAED: A Human in the Loop Improves Annotation Error Detection
Leon Weber, Barbara Plank
Human or Not? A Gamified Approach to the Turing Test
Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, Yoav Shoham
Automatic Discrimination of Human and Neural Machine Translation in Multilingual Scenarios
Malina Chichirau, Rik van Noord, Antonio Toral