Private Fine-Tuning

Private fine-tuning focuses on adapting pre-trained large language models (LLMs) and other deep learning models to specific tasks using private datasets while preserving data privacy. Current research emphasizes techniques such as low-rank adaptation (LoRA), differential privacy (DP) mechanisms (including DP-SGD and novel noise-addition strategies), and zeroth-order optimization, which aim to minimize privacy leakage while maintaining model accuracy. This work is crucial for enabling the use of powerful models in sensitive domains like healthcare and finance, where data privacy is paramount, and for making the training of large models more efficient by reducing the number of trainable parameters. A sketch combining two of these techniques appears below.
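As a concrete illustration of how these pieces fit together, the sketch below combines LoRA-style low-rank adapters with a single DP-SGD step written in plain PyTorch. It is a minimal sketch, not any particular paper's method: the `LoRALinear` module and `dp_sgd_step` function are hypothetical names introduced here, the per-example loop is an inefficient but transparent way to obtain per-sample gradients, and privacy accounting (tracking the cumulative epsilon budget) is omitted.

```python
# Minimal sketch: DP-SGD fine-tuning of LoRA adapters on a frozen linear
# layer. Hypothetical illustration only; no privacy accounting is done.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the backbone stays frozen
        # Only these low-rank factors are trained (and thus privatized).
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        low_rank = F.linear(F.linear(x, self.lora_a), self.lora_b)
        return self.base(x) + self.scale * low_rank

def dp_sgd_step(model, loss_fn, xb, yb, optimizer,
                max_grad_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update: clip each per-example gradient to max_grad_norm,
    sum, add Gaussian noise, average, then step the optimizer."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):  # microbatches of size 1 for per-sample grads
        optimizer.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        clip = torch.clamp(max_grad_norm / (norm + 1e-6), max=1.0)
        for s, p in zip(summed, params):
            s += p.grad * clip  # clip, then accumulate
    for s, p in zip(summed, params):
        noise = torch.randn_like(s) * noise_multiplier * max_grad_norm
        p.grad = (s + noise) / len(xb)  # noisy average gradient
    optimizer.step()

# Toy usage: privately adapt the LoRA factors on random data.
model = LoRALinear(nn.Linear(16, 2))
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
xb, yb = torch.randn(8, 16), torch.randint(0, 2, (8,))
dp_sgd_step(model, F.cross_entropy, xb, yb, opt)
```

Because only the low-rank factors require gradients, the clipping and noise are applied to a small parameter set, which is one reason LoRA pairs well with DP training; production code would typically use vectorized per-sample gradients (e.g., a library such as Opacus) rather than this per-example loop.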

Papers