Weight Selection
Weight selection in neural networks covers strategies for choosing, transferring, and re-weighting network weights during training and deployment, with the goals of improving model performance, efficiency, and fairness. Current research explores several directions: initializing smaller models by selecting weights from larger pre-trained ones, applying dynamic weighting schemes to balance contributions across tasks or clients in federated learning, and pruning weights to cut power consumption on resource-constrained hardware. These advances matter for the efficiency and robustness of neural networks across diverse applications, from image classification and object detection to personalized healthcare and resource-limited devices.
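As a minimal sketch of the first strategy, the snippet below initializes a smaller layer by uniformly selecting rows and columns from a larger pre-trained layer's weight matrix. It assumes PyTorch, and the layer sizes, the `select_weights` helper, and the evenly spaced selection rule are illustrative assumptions rather than any specific paper's method.

```python
# Hypothetical example: initialize a small "student" layer from a larger
# "teacher" layer by uniform element selection. Shapes and the selection
# rule are illustrative assumptions.
import torch
import torch.nn as nn


def select_weights(teacher_weight: torch.Tensor, target_shape) -> torch.Tensor:
    """Select an evenly spaced sub-matrix of a larger weight matrix.

    Evenly spaced rows and columns are chosen so the selected weights span
    the teacher's full weight matrix rather than only its first block.
    """
    t_out, t_in = teacher_weight.shape
    s_out, s_in = target_shape
    assert s_out <= t_out and s_in <= t_in, "target must be no larger than teacher"
    row_idx = torch.linspace(0, t_out - 1, s_out).round().long()
    col_idx = torch.linspace(0, t_in - 1, s_in).round().long()
    return teacher_weight[row_idx][:, col_idx].clone()


# "Teacher": a larger layer standing in for a pre-trained model's weights.
teacher = nn.Linear(1024, 512)

# "Student": a smaller layer initialized from the selected teacher weights.
student = nn.Linear(256, 128)
with torch.no_grad():
    student.weight.copy_(select_weights(teacher.weight, student.weight.shape))
    student.bias.copy_(teacher.bias[torch.linspace(0, 511, 128).round().long()])

print(student.weight.shape)  # torch.Size([128, 256])
```

Evenly spaced selection is used here simply to keep the copied weights spread across the teacher's parameter space; published approaches may also select along depth (choosing which layers to inherit) or use other selection criteria.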