Weight Sharing
Weight sharing, a technique in which multiple parts of a neural network reuse the same parameters, aims to improve efficiency by reducing model size and computational cost without necessarily sacrificing accuracy. Current research applies weight sharing to a range of architectures, including transformers, recurrent neural networks (RNNs), and convolutional neural networks (CNNs), often in the context of federated learning, continual learning, and neural architecture search. The approach is particularly attractive in resource-constrained environments and large-scale applications, where it lowers both training cost and deployment footprint. A minimal sketch of the idea is given below.
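The following is a minimal, illustrative sketch (assuming PyTorch; the class and parameter names are invented for the example, not taken from any cited paper). A single linear layer is reused across several "layers" of a network, so the model stores one weight matrix instead of one per layer, in the spirit of ALBERT-style cross-layer parameter sharing.

```python
import torch
import torch.nn as nn

class SharedLayerMLP(nn.Module):
    """Toy network where every layer reuses the same parameters."""

    def __init__(self, dim: int = 64, num_layers: int = 4):
        super().__init__()
        # One linear layer whose weights are shared by all num_layers applications.
        self.shared = nn.Linear(dim, dim)
        self.num_layers = num_layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Applying the same module repeatedly reuses the same weight matrix,
        # so depth increases while the parameter count stays constant.
        for _ in range(self.num_layers):
            x = torch.relu(self.shared(x))
        return x

model = SharedLayerMLP()
# Parameter count equals that of a single layer, not four separate layers.
print(sum(p.numel() for p in model.parameters()))
```

Compared with an unshared stack of four distinct linear layers, this variant holds roughly a quarter of the parameters; the trade-off is that all layers are constrained to compute with the same transformation, which is why much of the literature studies where and how much sharing is acceptable.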