Paper ID: 2406.12702
[WIP] Jailbreak Paradox: The Achilles' Heel of LLMs
Abhinav Rao, Monojit Choudhury, Somak Aditya
We introduce two paradoxes concerning the jailbreaking of foundation models: first, that it is impossible to construct a perfect jailbreak classifier; and second, that a weaker model cannot consistently detect whether a stronger (in a Pareto-dominant sense) model has been jailbroken. We provide formal proofs of both paradoxes, along with a short case study on Llama and GPT-4o to demonstrate them. We discuss the broader theoretical and practical repercussions of these results.
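The first paradox has the flavor of a classical diagonalization argument. The following is one plausible formalization of how such an impossibility result can be proved, sketched here for illustration under assumed definitions; it is not necessarily the proof given in the paper:

```latex
% Sketch of a diagonalization-style impossibility argument (assumption:
% "jailbreak" is a well-defined predicate over model-prompt pairs, and
% the model class is expressive enough to simulate the classifier J).
\begin{theorem}[No perfect jailbreak classifier; sketch]
There is no classifier $J$ satisfying $J(m,p)=1 \iff p$ jailbreaks $m$
for all models $m$ and prompts $p$, provided the model class can
simulate $J$.
\end{theorem}
\begin{proof}[Proof sketch]
Suppose such a $J$ exists. By self-reference, construct a prompt $p^*$
for a model $m$ that simulates $J$, such that $p^*$ jailbreaks $m$ iff
$J(m,p^*)=0$. If $J(m,p^*)=1$, then $p^*$ does not jailbreak $m$; if
$J(m,p^*)=0$, then $p^*$ does jailbreak $m$. Either way $J$ errs on
$(m,p^*)$, contradicting its assumed perfection.
\end{proof}
```

The construction mirrors the standard proof of the undecidability of the halting problem, with the classifier $J$ playing the role of the hypothetical halting decider.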
Submitted: Jun 18, 2024