Neural Network Weight
Neural network weights, the parameters encoding a model's learned knowledge, are a central focus of current research aiming to improve model efficiency, interpretability, and generalization. Active areas include developing methods for compressing weights (e.g., through quantization, pruning, and exploiting symmetries), learning representations of weight spaces to facilitate model generation and transfer learning, and understanding the relationship between weight distributions and model performance. These advancements are crucial for deploying larger, more powerful models while addressing challenges like catastrophic forgetting and improving the efficiency of training and inference.
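Two of the compression methods named above, quantization and magnitude pruning, can be illustrated with a minimal sketch. The functions below are hypothetical illustrations (not drawn from any specific paper cited here): uniform affine quantization maps weights onto a small grid of levels, and magnitude pruning zeroes out the smallest-magnitude fraction of weights.

```python
# Hypothetical sketch of two weight-compression steps: uniform
# quantization and magnitude pruning, applied to a flat list of weights.

def quantize(weights, num_bits=8):
    """Uniform affine quantization: snap each weight to one of
    2**num_bits evenly spaced levels spanning [min, max]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # avoid div-by-zero grid
    # Round to the nearest level index, then map back to float ("dequantize").
    return [round((w - lo) / scale) * scale + lo for w in weights]

def prune(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude `sparsity`
    fraction of weights, keeping the rest unchanged."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k] if k else 0.0
    return [0.0 if abs(w) < threshold else w for w in weights]

weights = [0.91, -0.03, 0.47, 0.002, -0.88, 0.15]
q = quantize(weights, num_bits=4)   # each weight moves by at most scale/2
p = prune(weights, sparsity=0.5)    # half the weights become exactly 0.0
```

In practice both steps are applied per-layer (often per-channel), and the quantization scale and pruning mask are chosen to minimize the resulting accuracy loss rather than fixed globally as in this sketch.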