Human Generated
Research on human-generated content focuses on distinguishing human-created text, images, and audio from AI-generated counterparts, driven by concerns about misinformation and the ethical implications of increasingly sophisticated generative models. Current work applies a range of machine learning techniques, including large language models (LLMs) and deep neural networks, analyzing textual features, visual patterns, and audio characteristics to improve detection accuracy. This field is crucial for developing robust methods to identify AI-generated content, safeguarding against malicious use and ensuring the authenticity of information across domains from news media to education.
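To make the detection setup described above concrete, the sketch below trains a simple human-vs-AI text classifier. It is a minimal, hypothetical example using TF-IDF character n-grams and logistic regression; the toy training sentences, labels, and the `detector` pipeline name are invented for illustration and are not drawn from any of the papers listed here.

```python
# Minimal sketch of an AI-vs-human text detector: TF-IDF features + logistic regression.
# The training examples and labels below are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "honestly the movie dragged a bit but the ending got me",                 # assumed human-written
    "I found the film's pacing uneven, yet the conclusion was satisfying.",   # assumed AI-generated
    "lol my cat just knocked my coffee over again",                           # assumed human-written
    "The feline displaced the beverage container, resulting in a spill.",     # assumed AI-generated
]
labels = ["human", "ai", "human", "ai"]

# Character n-grams capture surface stylistic cues such as punctuation, casing,
# and word forms, which shallow detectors often rely on.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Classify a new passage as "human" or "ai".
print(detector.predict(["The results indicate a statistically significant improvement."]))
```

Current research typically replaces such shallow features with LLM-based or deep neural detectors, but the overall train-then-predict structure of the pipeline is the same.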
Papers
DeepCRCEval: Revisiting the Evaluation of Code Review Comment Generation
Junyi Lu, Xiaojia Li, Zihan Hua, Lei Yu, Shiqi Cheng, Li Yang, Fengjun Zhang, Chun Zuo
GenAI Content Detection Task 2: AI vs. Human -- Academic Essay Authenticity Challenge
Shammur Absar Chowdhury, Hind Almerekhi, Mucahid Kutlu, Kaan Efe Keles, Fatema Ahmad, Tasnim Mohiuddin, George Mikros, Firoj Alam
CausalMob: Causal Human Mobility Prediction with LLMs-derived Human Intentions toward Public Events
Xiaojie Yang, Hangli Ge, Jiawei Wang, Zipei Fan, Renhe Jiang, Ryosuke Shibasaki, Noboru Koshizuka
Misalignment of Semantic Relation Knowledge between WordNet and Human Intuition
Zhihan Cao, Hiroaki Yamada, Simone Teufel, Takenobu Tokunaga