Paper ID: 2311.01873
Efficient Black-Box Adversarial Attacks on Neural Text Detectors
Vitalii Fishchuk, Daniel Braun
Neural text detectors are models trained to detect whether a given text was generated by a language model or written by a human. In this paper, we investigate three simple and resource-efficient strategies (parameter tweaking, prompt engineering, and character-level mutations) to alter texts generated by GPT-3.5 such that the changes are inconspicuous or unnoticeable to humans but cause misclassification by neural text detectors. The results show that parameter tweaking and character-level mutations, in particular, are effective strategies.
Submitted: Nov 3, 2023
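
To give an intuition for the character-level mutation strategy, one common instance of such a mutation is substituting Latin letters with visually identical Unicode homoglyphs (e.g., Cyrillic lookalikes), which leaves the text unchanged to a human reader while altering the detector's input tokens. The following Python sketch is a minimal, assumption-based illustration of that general idea, not the paper's actual implementation; the homoglyph table and mutation rate are hypothetical.

import random

# Hypothetical mapping from Latin letters to visually identical Cyrillic homoglyphs.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}

def mutate(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Randomly replace a small fraction of characters with homoglyphs."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and rng.random() < rate:
            out.append(HOMOGLYPHS[ch])  # swap in the lookalike character
        else:
            out.append(ch)
    return "".join(out)

print(mutate("This sentence was produced by a language model.", rate=0.3))

Because the substituted characters render identically (or near-identically) in most fonts, a low mutation rate can perturb a detector's input while remaining unnoticeable to human readers.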