Paper ID: 2401.00870
ConfusionPrompt: Practical Private Inference for Online Large Language Models
Peihua Mai, Ran Yan, Rui Ye, Youjia Yang, Yinchuan Li, Yan Pang
State-of-the-art large language models (LLMs) are commonly deployed as online services, requiring users to transmit informative prompts to cloud servers and thus raising substantial privacy concerns. In response, we present ConfusionPrompt, a novel private LLM inference framework that obfuscates the server by (i) decomposing the prompt into sub-prompts, and (ii) generating pseudo prompts alongside the genuine sub-prompts as input to the online LLM. The user then recomposes the returned responses to obtain the final response. This design endows our framework with two advantages over previous protocols: (i) it can be seamlessly integrated with existing black-box LLMs, and (ii) it achieves a significantly better privacy-utility trade-off than existing text perturbation-based methods. We develop a $(\lambda, \mu, \rho)$-privacy model to formulate the requirement for a privacy-preserving group of prompts, and provide a complexity analysis affirming ConfusionPrompt's efficiency. Our empirical evaluation shows that our method offers significantly higher utility than local inference with open-source models and perturbation-based techniques, while requiring far less memory than open-source LLMs.
Submitted: Dec 30, 2023
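The abstract outlines a decompose-confuse-recompose workflow. Below is a minimal, hypothetical sketch of that workflow, not the authors' implementation: the helper names (decompose, generate_pseudo_prompts, recompose, query_llm) are assumed placeholders for the paper's components, and the interleaving strategy shown is only illustrative.

```python
# Illustrative sketch of the ConfusionPrompt workflow described in the abstract.
# All helper callables are hypothetical placeholders, not the authors' API.
import random
from typing import Callable, List


def confusion_prompt_inference(
    prompt: str,
    decompose: Callable[[str], List[str]],                      # split prompt into sub-prompts
    generate_pseudo_prompts: Callable[[List[str]], List[str]],  # produce decoy prompts
    recompose: Callable[[str, List[str]], str],                 # merge sub-responses locally
    query_llm: Callable[[str], str],                            # call to the online black-box LLM
) -> str:
    # (i) Decompose the sensitive prompt into less informative sub-prompts.
    sub_prompts = decompose(prompt)

    # (ii) Generate pseudo prompts so the server cannot distinguish genuine queries.
    pseudo_prompts = generate_pseudo_prompts(sub_prompts)

    # Interleave genuine and pseudo prompts before sending them to the server.
    all_prompts = sub_prompts + pseudo_prompts
    random.shuffle(all_prompts)

    # The server only ever sees the mixed set of prompts.
    responses = {p: query_llm(p) for p in all_prompts}

    # Keep the responses to the genuine sub-prompts and recompose the final answer locally.
    genuine_responses = [responses[p] for p in sub_prompts]
    return recompose(prompt, genuine_responses)
```

Under this reading, all privacy-sensitive steps (decomposition, pseudo-prompt generation, recomposition) run on the user side, so the framework plugs into any existing black-box LLM service without modification.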