Sparsity Constraint
Sparsity constraints in machine learning reduce model complexity by limiting the number of non-zero parameters, which improves computational efficiency and can also improve generalization. Current research focuses on efficient algorithms for imposing sparsity during training (e.g., iterative pruning, gradient-based methods, and combinatorial optimization) and on applying these techniques to a range of architectures, including convolutional neural networks and large language models, as well as in federated learning settings. This work matters because it reduces the computational burden of large models, enhances privacy in distributed learning, and improves interpretability by identifying which features are important.
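The core mechanism behind most of these pruning methods, keeping only the largest-magnitude weights so that a fixed fraction of parameters is exactly zero, can be sketched in a few lines. The function name and toy weight vector below are illustrative, not drawn from any specific paper:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` (a fraction in [0, 1]) of the weights become zero."""
    k = int(sparsity * len(weights))  # number of weights to remove
    # Indices sorted by absolute magnitude, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:k]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2, -0.03, 0.6]
print(prune_by_magnitude(w, 0.5))
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.0, 0.6]
```

Iterative pruning repeats this step across training, gradually raising `sparsity` and fine-tuning between rounds so the remaining weights can compensate for those removed.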