Neural Network Weights
Neural network weights, the parameters that encode a model's learned knowledge, are a central focus of current research on model efficiency, interpretability, and generalization. Active areas include compressing weights (e.g., through quantization, pruning, and exploiting symmetries), learning representations of weight spaces to support model generation and transfer learning, and understanding how weight distributions relate to model performance. These advances matter for deploying larger, more powerful models, for mitigating challenges such as catastrophic forgetting, and for making training and inference more efficient.
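To make the compression ideas above concrete, here is a minimal sketch of two standard techniques applied to a single weight matrix: global magnitude pruning followed by symmetric per-tensor int8 quantization. The NumPy implementation, the function names, and the 50% sparsity and 8-bit settings are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity` fraction become zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0).astype(weights.dtype)

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: w ~= scale * q, with q an int8 in [-127, 127]."""
    scale = max(np.max(np.abs(weights)) / 127.0, 1e-12)  # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to float32 for use at inference time."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)

    w_pruned = magnitude_prune(w, sparsity=0.5)   # half the weights set to zero
    q, scale = quantize_int8(w_pruned)            # int8 storage: 4x smaller than float32
    w_restored = dequantize(q, scale)

    err = np.abs(w_pruned - w_restored).max()     # quantization error is bounded by ~scale/2
    print(f"sparsity: {np.mean(w_pruned == 0):.2%}, max abs quant error: {err:.5f}")
```

In practice, frameworks apply these operations per layer or per channel and often fine-tune afterward to recover accuracy; the sketch keeps everything per-tensor for clarity.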