Paper ID: 2305.13865

Selective Pre-training for Private Fine-tuning

Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang

Suppose we want to train text prediction models for email clients or word processors. These models, which serve billions of predictions per hour, must preserve the privacy of user data and adhere to strict model size constraints to meet memory and inference-time requirements and to reduce inference cost. Building small, fast, and private domain-specific language models is a thriving area of research. In this work, we show that careful pre-training on a subset of the public dataset, selected with guidance from the private dataset, is crucial for training small DP language models. On standard benchmarks, models trained with our new framework achieve state-of-the-art performance, improving upon all baselines from the literature. Beyond these performance improvements, our framework shows that with careful pre-training and private fine-tuning, smaller models can match the performance of much larger models that do not have access to private data, highlighting the promise of private learning as a tool for model compression and efficiency. In many applications, such as healthcare and finance, private datasets are usually of much higher quality than public datasets, and our work shows novel ways of utilizing private data at all stages of the training pipeline to improve deep learning efficiency. Language models based on our framework have been used in multiple real-world deployments serving billions of predictions per day (and saving millions of dollars in inference cost), highlighting the general applicability of our framework beyond academic benchmarks.

Submitted: May 23, 2023
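
The pipeline the abstract describes (pre-train a small model on a targeted subset of public data, then fine-tune it on private data with differential privacy) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the subset-selection rule guided by the private data is stood in for by a placeholder scorer, the DP fine-tuning loop uses naive per-example gradient clipping with Gaussian noise, privacy accounting is omitted, and all names (TinyLM, select_public_subset, dp_finetune) and hyperparameters are assumptions chosen for illustration.

# Sketch of selective pre-training followed by DP fine-tuning.
# Assumptions: synthetic data, placeholder domain scores, no privacy accounting.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, SEQ = 1000, 64, 16

def make_batch(n):
    """Synthetic token sequences standing in for real text."""
    x = torch.randint(0, VOCAB, (n, SEQ))
    return x[:, :-1], x[:, 1:]  # next-token prediction inputs/targets

class TinyLM(nn.Module):
    """A deliberately small language model (the size-constrained setting)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

def lm_loss(model, x, y):
    logits = model(x)
    return F.cross_entropy(logits.reshape(-1, VOCAB), y.reshape(-1))

# Step 1 (assumed selection rule): keep the public examples scored as most
# similar to the private domain. A random scorer is a placeholder here for
# whatever domain-relevance signal is derived from the private dataset.
def select_public_subset(public_x, public_y, keep_frac=0.25):
    scores = torch.rand(public_x.size(0))  # placeholder domain-relevance scores
    k = int(keep_frac * public_x.size(0))
    idx = scores.topk(k).indices
    return public_x[idx], public_y[idx]

# Step 2: ordinary (non-private) pre-training on the selected public subset.
def pretrain(model, x, y, steps=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        lm_loss(model, x, y).backward()
        opt.step()

# Step 3: DP-SGD-style fine-tuning on private data:
# clip each example's gradient, average, and add Gaussian noise.
def dp_finetune(model, x, y, steps=5, lr=1e-3, clip=1.0, noise_mult=1.0):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    n = x.size(0)
    for _ in range(steps):
        accum = [torch.zeros_like(p) for p in model.parameters()]
        for i in range(n):  # per-example gradients
            model.zero_grad()
            lm_loss(model, x[i:i+1], y[i:i+1]).backward()
            grads = [p.grad.detach().clone() for p in model.parameters()]
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = min(1.0, clip / (norm + 1e-6))  # clip to norm <= clip
            for a, g in zip(accum, grads):
                a += g * scale
        for p, a in zip(model.parameters(), accum):
            noise = torch.randn_like(a) * noise_mult * clip
            p.grad = (a + noise) / n  # noisy average gradient
        opt.step()

if __name__ == "__main__":
    pub_x, pub_y = make_batch(256)   # "public" corpus
    priv_x, priv_y = make_batch(64)  # "private" corpus
    model = TinyLM()
    sub_x, sub_y = select_public_subset(pub_x, pub_y)
    pretrain(model, sub_x, sub_y)
    dp_finetune(model, priv_x, priv_y)
    print("finished selective pre-training + DP fine-tuning sketch")

In practice the per-example loop would be replaced by vectorized per-sample gradients and a proper privacy accountant (for example, via a DP training library), but the three-stage structure, select public data with private guidance, pre-train, then privately fine-tune, is the point of the sketch.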