LD Pruner
LD Pruner encompasses pruning techniques for a range of neural network architectures, aiming to reduce the computational cost and memory footprint of large models such as Latent Diffusion Models (LDMs), Vision Transformers, and Large Language Models (LLMs) while preserving performance. Current research focuses on task-agnostic pruning methods that use model structure and gradient information to identify and remove less critical components, often incorporating differentiable operators and explainability-aware mechanisms. This work matters because efficient model compression is essential for deploying advanced AI models on resource-constrained devices and for accelerating training and inference, with impact on both scientific research and practical applications.