Sparsity Constraint
Sparsity constraints in machine learning reduce model complexity by limiting the number of non-zero parameters, which improves efficiency and can aid generalization. Current research focuses on efficient algorithms for imposing sparsity during training (e.g., iterative pruning, gradient-based methods, and combinatorial optimization) and on applying these techniques to a range of architectures, including convolutional neural networks and large language models, as well as in federated learning settings. This work matters because it reduces the computational burden of large models, enhances privacy in distributed learning, and improves interpretability by identifying the most important features.
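To make the idea concrete, here is a minimal NumPy sketch of one common way to impose a sparsity constraint: magnitude-based pruning, which zeros out the smallest-magnitude weights until a target fraction of parameters is zero. The function name and the simple global threshold rule are illustrative assumptions, not taken from any particular paper surveyed here.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Illustrative sketch: zero out the smallest-magnitude entries
    so that roughly a `sparsity` fraction of the weights is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
w_sparse = magnitude_prune(w, sparsity=0.75)
print(f"nonzero fraction: {np.count_nonzero(w_sparse) / w.size:.2f}")
```

In iterative pruning, a step like this is applied repeatedly during training, gradually raising `sparsity` and fine-tuning between steps so the remaining weights can compensate for those removed.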
Papers
November 12, 2024
October 12, 2024
September 11, 2024
June 27, 2024
June 17, 2024
June 12, 2024
May 30, 2024
May 15, 2024
May 9, 2024
April 6, 2024
March 27, 2024
March 16, 2024
February 13, 2024
August 31, 2023
June 16, 2023
June 15, 2023
June 13, 2023
June 12, 2023
February 28, 2023