Machine-Generated Text Detection
Machine-generated text detection focuses on distinguishing computer-generated content from human-written text, a task made increasingly difficult by the growing sophistication of large language models (LLMs). Current research emphasizes robust, generalizable detection methods, often built on transformer-based architectures and exploring techniques such as watermarking, rewriting analysis, and multi-modal approaches that combine text, image, and audio signals. Reliable detection is crucial for mitigating misinformation, plagiarism, and other malicious uses of LLMs, with impact on journalism, education, and online content moderation.
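As a concrete illustration of the watermarking idea mentioned above, the sketch below implements a toy green-list detector in the spirit of hash-based watermark schemes: the generator is assumed to bias each token toward a pseudo-random green list seeded by the previous token, and the detector checks for that bias with a one-proportion z-test. The hashing rule, the `GAMMA` fraction, and the whitespace tokens are illustrative assumptions, not any specific published scheme.

```python
import hashlib
import math
import random

GAMMA = 0.5  # assumed fraction of the vocabulary on each step's "green list"

def green_list(prev_token: str, vocab: list[str], gamma: float = GAMMA) -> set[str]:
    """Toy rule: hash the previous token to seed a PRNG that picks the green list."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(vocab, max(1, int(gamma * len(vocab)))))

def watermark_z_score(tokens: list[str], vocab: list[str], gamma: float = GAMMA) -> float:
    """One-proportion z-test: watermarked generators over-use green-list tokens."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    hits = sum(tok in green_list(prev, vocab, gamma) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

# Usage: a large positive z-score suggests the text carries this watermark;
# unwatermarked text hits the green list at roughly the chance rate gamma.
```

Real deployments typically derive the green list from the model's tokenizer vocabulary with a keyed hash and pick the rejection threshold to bound the false-positive rate.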
Papers
On the Zero-Shot Generalization of Machine-Generated Text Detectors
Xiao Pu, Jingyu Zhang, Xiaochuang Han, Yulia Tsvetkov, Tianxing He
Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature
Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, Yue Zhang
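The conditional probability curvature criterion named in the Fast-DetectGPT title can be sketched as follows: score a passage by how far its token log-likelihood sits above what the scoring model would assign to its own samples at each position, computed analytically from the model's conditional distributions. This is a minimal sketch assuming the sampling and scoring models coincide (GPT-2 here as a stand-in); the paper's actual models, normalization details, and decision thresholds may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is used here only as a convenient stand-in scoring model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def conditional_probability_curvature(text: str) -> float:
    """Higher scores mean the text is unusually likely under the model,
    the signal zero-shot detectors use to flag machine-generated text."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits                          # [1, T, V]
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)      # predictions for tokens 2..T
    targets = input_ids[:, 1:]                                    # observed tokens 2..T
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    probs = log_probs.exp()
    # Analytic mean and variance of the log-probability of a token sampled
    # from the model's own conditional distribution at each position.
    mean_lp = (probs * log_probs).sum(-1)
    var_lp = (probs * log_probs.pow(2)).sum(-1) - mean_lp.pow(2)
    # Curvature: standardized gap between observed and expected log-likelihood.
    return ((token_lp.sum() - mean_lp.sum()) / var_lp.sum().sqrt()).item()

# Usage: texts scoring above a threshold tuned on held-out data are flagged as machine-generated.
```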