Private Fine-Tuning
Private fine-tuning adapts pre-trained large language models (LLMs) and other deep learning models to specific tasks on private datasets while preserving the privacy of that data. Current research emphasizes techniques such as low-rank adaptation (LoRA), differential privacy (DP) mechanisms (including DP-SGD and novel noise-addition strategies), and zeroth-order optimization, all aimed at minimizing privacy leakage while maintaining model accuracy; a sketch combining two of these ideas follows. The field is crucial for deploying powerful models in sensitive domains such as healthcare and finance, where data privacy is paramount, and parameter-efficient methods like LoRA also make training large models cheaper by shrinking the number of trainable parameters.
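To make the combination concrete, here is a minimal PyTorch sketch of a LoRA-style adapter trained with a DP-SGD-style step (per-example gradient clipping plus Gaussian noise). It is an illustrative toy, not the method of any particular paper: the module and function names, shapes, and hyperparameters (rank, alpha, clip_norm, noise_mult) are invented for the example, and privacy accounting is omitted.

```python
# Sketch: LoRA adapter + DP-SGD-style update. All names/hyperparameters are
# illustrative assumptions; no privacy accounting is performed.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: W x + scale * (B A) x."""
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pre-trained weight stays fixed
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD-style step: clip each example's gradient, sum, add noise, average."""
    params = [p for p in model.parameters() if p.requires_grad]  # only A and B
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):  # microbatch of one example at a time
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        factor = min(1.0, clip_norm / (norm + 1e-12))  # clip to L2 norm clip_norm
        for s, g in zip(summed, grads):
            s.add_(g * factor)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noisy = s + torch.randn_like(s) * noise_mult * clip_norm  # Gaussian noise
            p.add_(noisy, alpha=-lr / len(xs))  # average and descend

model = LoRALinear(32, 4)
xs, ys = torch.randn(8, 32), torch.randint(0, 4, (8,))
dp_sgd_step(model, nn.CrossEntropyLoss(), xs, ys)
```

Freezing the base weights means noise is injected only into the small adapter matrices; a common motivation for pairing LoRA with DP-SGD is that fewer trainable parameters leave less room for noise to drown out the signal at a given privacy budget.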
Papers
Nineteen entries, dated from July 1, 2022 through November 11, 2024.