Pruned Models
Model pruning aims to reduce the size and computational cost of deep neural networks (DNNs) by removing less important parameters or neurons while preserving accuracy. Current research focuses on developing efficient pruning algorithms, including structured pruning methods for large language models (LLMs) and vision transformers, and on techniques such as optimal transport for mitigating security vulnerabilities in pruned models. This work matters because it addresses the growing need to deploy DNNs on resource-constrained devices and improves the efficiency and security of a wide range of machine learning applications.
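To make the core idea concrete, here is a minimal sketch of unstructured magnitude pruning, the simplest common baseline: the smallest-magnitude weights are zeroed out up to a target sparsity level. This is an illustrative NumPy example, not the method of any specific paper listed below; the function name and sparsity parameter are our own.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the
    smallest absolute values (unstructured magnitude pruning)."""
    out = weights.copy().ravel()
    k = int(sparsity * out.size)          # number of weights to remove
    if k:
        idx = np.argsort(np.abs(out))[:k] # indices of smallest-magnitude weights
        out[idx] = 0.0
    return out.reshape(weights.shape)

w = np.array([[0.1, -2.0, 0.03],
              [1.5, -0.2, 0.9]])
pruned = magnitude_prune(w, sparsity=0.5)  # half of the 6 weights set to zero
```

Structured pruning, by contrast, removes entire rows, columns, channels, or attention heads so that the resulting model is smaller and faster without requiring sparse-matrix support.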
15 papers
Papers
February 19, 2025
January 25, 2025
January 17, 2025
January 2, 2025
December 29, 2024
November 21, 2024
November 16, 2024
October 18, 2024
August 28, 2024