Gradient Pruning
Gradient pruning is a model compression technique that aims to improve the efficiency and privacy of deep learning models by selectively removing less important gradient information during training. Current research focuses on applying this technique to various architectures, including large language models and quantum neural networks, and explores its effectiveness in defending against gradient inversion attacks in federated learning. This approach offers significant potential for reducing computational costs in training and inference, enhancing privacy in collaborative learning settings, and accelerating the deployment of large-scale models across diverse applications.
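To make the core idea concrete, below is a minimal sketch of one common variant, magnitude-based gradient pruning, written in PyTorch. It zeroes out all but the largest-magnitude fraction of each parameter's gradient after backpropagation, so only the most informative components drive the weight update. The model, data, `prune_gradients` helper, and the `keep_ratio` value are illustrative assumptions, not a specific method from the papers listed here.

```python
# Minimal sketch of magnitude-based gradient pruning (illustrative only).
import torch
import torch.nn as nn

def prune_gradients(model: nn.Module, keep_ratio: float = 0.1) -> None:
    """Zero out all but the largest-magnitude fraction of each gradient tensor."""
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        k = max(1, int(keep_ratio * grad.numel()))
        # Threshold = k-th largest absolute gradient value in this tensor.
        threshold = torch.topk(grad.abs().flatten(), k).values[-1]
        # Keep entries at or above the threshold; zero the rest in place.
        grad.mul_((grad.abs() >= threshold).to(grad.dtype))

# Usage: a toy regression step with pruned gradients (hypothetical setup).
model = nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 16), torch.randn(32, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
prune_gradients(model, keep_ratio=0.1)  # keep ~10% of gradient entries
optimizer.step()
```

In a federated setting, the same idea is typically applied before gradients are shared, which both reduces communication cost and limits the information available to gradient inversion attacks.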