Deep Neural Network Parameters
Deep neural network (DNN) parameters are the internal weights and biases that determine a DNN's behavior, and research focuses on optimizing their size, efficiency, and robustness. Current efforts concentrate on techniques like model pruning, quantization, and efficient training algorithms (e.g., AdamW with weight prediction), often applied to architectures such as ResNet and MobileNet, to reduce computational costs and improve performance. These advancements are crucial for deploying DNNs on resource-constrained devices (edge computing) and enhancing model security against adversarial attacks, ultimately impacting various fields from image recognition to power grid monitoring.
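As a minimal sketch of two of the techniques named above, the following Python snippet (assuming a PyTorch environment and a hypothetical toy model standing in for a ResNet/MobileNet-style network) applies magnitude-based pruning and post-training dynamic quantization to reduce parameter count and inference cost.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small model used purely for illustration.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Pruning: zero out the 30% of weights with the smallest L1 magnitude
# in each linear layer, shrinking the effective parameter count.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantization: convert linear-layer weights to int8 for cheaper
# inference, e.g. on resource-constrained edge devices.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized_model(torch.randn(1, 784))
print(out.shape)  # torch.Size([1, 10])

This is only an illustration of the general approach; the cited research combines such compression steps with efficient training schemes and robustness-oriented evaluation.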