Paper ID: 2412.05346

BadGPT-4o: stripping safety finetuning from GPT models

Ekaterina Krupkina, Dmitrii Volkov

We show that a version of Qi et al. (2023)'s simple fine-tuning poisoning technique strips GPT-4o's safety guardrails without degrading the model. The resulting BadGPT attack matches the best white-box jailbreaks on HarmBench and StrongREJECT, while incurring none of the token overhead or performance hits common to jailbreaks, as evaluated on tinyMMLU and on open-ended generations. Despite having been known for a year, this attack remains easy to execute.
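Since GPT-4o is only reachable through OpenAI's hosted fine-tuning API, an attack of this kind reduces to submitting a poisoned training file as an ordinary fine-tuning job. Below is a minimal sketch of that workflow, under stated assumptions: the file name, the model snapshot, and the use of default hyperparameters are illustrative, not the authors' exact setup, and the poisoned dataset itself is not shown.

```python
# Sketch of submitting a fine-tuning job via OpenAI's fine-tuning API.
# Assumptions (not from the paper): the file "poisoned_mix.jsonl" and the
# "gpt-4o-2024-08-06" snapshot; default hyperparameters throughout.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a chat-format JSONL training file (the poisoned dataset).
training_file = client.files.create(
    file=open("poisoned_mix.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job; the job's resulting model ID is then
# queried like any other fine-tuned GPT-4o model.
job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",
    training_file=training_file.id,
)
print(job.id)
```

The point of the sketch is that no special access is required: the attack uses only the public fine-tuning endpoints, which is why the paper stresses how easy it remains to execute.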

Submitted: Dec 6, 2024