Human Generated
Research on human-generated content focuses on distinguishing human-created text, images, and audio from AI-generated counterparts, driven by concerns about misinformation and the ethical implications of increasingly sophisticated generative models. Current work applies a range of machine learning techniques, including large language models (LLMs) and deep neural networks, to analyze textual features, visual patterns, and audio characteristics in order to improve detection accuracy. Robust detection methods matter for safeguarding against malicious use and for verifying the authenticity of information across domains ranging from news media to education.
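As a loose illustration of the "textual features" mentioned above, the sketch below computes two simple stylometric signals (sentence-length variation, sometimes called burstiness, and vocabulary diversity) that detection pipelines commonly feed into a classifier. This is a toy example, not any specific paper's method; the function name and feature choices are illustrative assumptions.

```python
import re
from statistics import mean, pstdev

def stylometric_features(text: str) -> dict:
    """Toy stylometric features of the kind used as classifier inputs
    in AI-text detection (illustrative only, not a real detector)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    sent_lens = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        # Burstiness: human prose often varies sentence length more
        # than machine-generated text does.
        "sentence_len_std": pstdev(sent_lens) if len(sent_lens) > 1 else 0.0,
        "avg_sentence_len": mean(sent_lens) if sent_lens else 0.0,
        # Type-token ratio: a crude measure of vocabulary diversity.
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

sample = "Short one. Then a considerably longer sentence follows here. Tiny."
feats = stylometric_features(sample)
```

In a realistic pipeline these hand-crafted features would be replaced or supplemented by embeddings from a neural model, but the overall shape (text in, feature vector out, classifier on top) is the same.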
Papers
Agent S: An Open Agentic Framework that Uses Computers Like a Human
Saaket Agashe, Jiuzhou Han, Shuyu Gan, Jiachen Yang, Ang Li, Xin Eric Wang
Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic Analysis of Annotators and Targets
Tommaso Giorgi, Lorenzo Cima, Tiziano Fagni, Marco Avvenuti, Stefano Cresci