Neural Network Weight
Neural network weights, the parameters encoding a model's learned knowledge, are a central focus of current research aiming to improve model efficiency, interpretability, and generalization. Active areas include developing methods for compressing weights (e.g., through quantization, pruning, and exploiting symmetries), learning representations of weight spaces to facilitate model generation and transfer learning, and understanding the relationship between weight distributions and model performance. These advances are crucial for deploying larger, more powerful models while addressing challenges such as catastrophic forgetting and making training and inference more efficient.
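To make the two compression techniques named above concrete, here is a minimal sketch, using NumPy and a toy random weight matrix, of magnitude pruning (zeroing the smallest weights) followed by symmetric 8-bit quantization (mapping floats to int8 with a single scale factor). The matrix, threshold, and scale choices are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)  # toy weight matrix

# Magnitude pruning: zero out the smallest 50% of weights by absolute value.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# Symmetric 8-bit quantization: one scale factor maps floats onto int8.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale  # approximate reconstruction

sparsity = float(np.mean(pruned == 0.0))
max_err = float(np.abs(pruned - dequantized).max())
print(f"sparsity: {sparsity:.2f}, max quantization error: {max_err:.4f}")
```

Pruning yields a sparse matrix that can be stored compactly, and quantization shrinks each remaining weight from 32 bits to 8; the rounding error per weight is bounded by half the scale factor.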